diff --git a/30daysofIA/archive/index.html b/30daysofIA/archive/index.html index 4c33b9ad40..f0b8807a51 100644 --- a/30daysofIA/archive/index.html +++ b/30daysofIA/archive/index.html @@ -14,13 +14,13 @@ - +
Skip to main content

Archive

Archive

- + \ No newline at end of file diff --git a/30daysofIA/hacktogether-recap/index.html b/30daysofIA/hacktogether-recap/index.html index 31619c8509..2de6e18fda 100644 --- a/30daysofIA/hacktogether-recap/index.html +++ b/30daysofIA/hacktogether-recap/index.html @@ -3,7 +3,7 @@ -HackTogether Recap 🍂 | Build Intelligent Apps On Azure +HackTogether Recap 🍂 | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

HackTogether Recap 🍂

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that the JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of the JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to an exploration of the Contoso Real Estate project, from its frontend to its backend and future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: In the AI era, we dive into the Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack-Together Module collection
  2. Let's #HackTogether: JavaScript on Azure Keynote
  3. Step-by-Step Guide: Migrating from the v3 to the v4 programming model for Azure Functions Node.js applications (a minimal v4 sketch follows below)
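For context on what resource 3 covers: in the v4 programming model, Azure Functions for Node.js are registered directly in code rather than through function.json files. The snippet below is a minimal, illustrative sketch (the function name, route behavior, and greeting logic are placeholders, not taken from the guide), assuming a Functions project with the @azure/functions v4 package installed:

// Minimal sketch of an HTTP-triggered function in the v4 programming model.
// Assumes an Azure Functions Node.js project with @azure/functions v4 installed.
const { app } = require('@azure/functions');

// v4 registers functions in code; no function.json is needed for this trigger.
app.http('helloHackTogether', {
    methods: ['GET', 'POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        context.log(`Handling ${request.method} request for ${request.url}`);
        // Read the name from the query string or the request body, with a fallback.
        const name = request.query.get('name') || (await request.text()) || 'world';
        return { body: `Hello, ${name}!` };
    }
});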

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along with the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series, or building an end-to-end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/index.html b/30daysofIA/index.html index b056370385..7863fea74c 100644 --- a/30daysofIA/index.html +++ b/30daysofIA/index.html @@ -3,7 +3,7 @@ -Learn in #30DaysOfIA | Build Intelligent Apps On Azure +Learn in #30DaysOfIA | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that the JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of the JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to an exploration of the Contoso Real Estate project, from its frontend to its backend and future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: In the AI era, we dive into the Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack-Together Module collection
  2. Let's #HackTogether: JavaScript on Azure Keynote
  3. Step-by-Step Guide: Migrating from the v3 to the v4 programming model for Azure Functions Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along with the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series, or building an end-to-end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/page/2/index.html b/30daysofIA/page/2/index.html index 71ce09c5d0..14b7979913 100644 --- a/30daysofIA/page/2/index.html +++ b/30daysofIA/page/2/index.html @@ -3,7 +3,7 @@ -Learn in #30DaysOfIA | Build Intelligent Apps On Azure +Learn in #30DaysOfIA | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/road-to-fallforIA/index.html b/30daysofIA/road-to-fallforIA/index.html index 3eaf7080ce..71214631e0 100644 --- a/30daysofIA/road-to-fallforIA/index.html +++ b/30daysofIA/road-to-fallforIA/index.html @@ -3,7 +3,7 @@ -Fall is Coming! 🍂 | Build Intelligent Apps On Azure +Fall is Coming! 🍂 | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

Fall is Coming! 🍂

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

Fall is Coming! 🍂

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/30-days-of-ia/index.html b/30daysofIA/tags/30-days-of-ia/index.html index 04f4dfbd36..f50724f532 100644 --- a/30daysofIA/tags/30-days-of-ia/index.html +++ b/30daysofIA/tags/30-days-of-ia/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "30-days-of-IA" | Build Intelligent Apps On Azure +2 posts tagged with "30-days-of-IA" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "30-days-of-IA"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that the JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of the JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to an exploration of the Contoso Real Estate project, from its frontend to its backend and future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: In the AI era, we dive into the Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack-Together Module collection
  2. Let's #HackTogether: JavaScript on Azure Keynote
  3. Step-by-Step Guide: Migrating from the v3 to the v4 programming model for Azure Functions Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along with the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series, or building an end-to-end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/30-days-of-ia/page/2/index.html b/30daysofIA/tags/30-days-of-ia/page/2/index.html index bed6b3f879..262f72c86a 100644 --- a/30daysofIA/tags/30-days-of-ia/page/2/index.html +++ b/30daysofIA/tags/30-days-of-ia/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "30-days-of-IA" | Build Intelligent Apps On Azure +2 posts tagged with "30-days-of-IA" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "30-days-of-IA"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "30-days-of-IA"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/ask-the-expert/index.html b/30daysofIA/tags/ask-the-expert/index.html index 52196397ab..6368c99816 100644 --- a/30daysofIA/tags/ask-the-expert/index.html +++ b/30daysofIA/tags/ask-the-expert/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "ask-the-expert" | Build Intelligent Apps On Azure +2 posts tagged with "ask-the-expert" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "ask-the-expert"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that the JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of the JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to an exploration of the Contoso Real Estate project, from its frontend to its backend and future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: In the AI era, we dive into the Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack-Together Module collection
  2. Let's #HackTogether: JavaScript on Azure Keynote
  3. Step-by-Step Guide: Migrating from the v3 to the v4 programming model for Azure Functions Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along with the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series, or building an end-to-end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/ask-the-expert/page/2/index.html b/30daysofIA/tags/ask-the-expert/page/2/index.html index 3cf65b18af..17642d6416 100644 --- a/30daysofIA/tags/ask-the-expert/page/2/index.html +++ b/30daysofIA/tags/ask-the-expert/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "ask-the-expert" | Build Intelligent Apps On Azure +2 posts tagged with "ask-the-expert" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "ask-the-expert"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "ask-the-expert"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/azure-container-apps/index.html b/30daysofIA/tags/azure-container-apps/index.html index 920304ac70..307f2fecec 100644 --- a/30daysofIA/tags/azure-container-apps/index.html +++ b/30daysofIA/tags/azure-container-apps/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "azure-container-apps" | Build Intelligent Apps On Azure +2 posts tagged with "azure-container-apps" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "azure-container-apps"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that the JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of the JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to an exploration of the Contoso Real Estate project, from its frontend to its backend and future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: In the AI era, we dive into the Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack-Together Module collection
  2. Let's #HackTogether: JavaScript on Azure Keynote
  3. Step-by-Step Guide: Migrating from the v3 to the v4 programming model for Azure Functions Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along with the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series, or building an end-to-end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/azure-container-apps/page/2/index.html b/30daysofIA/tags/azure-container-apps/page/2/index.html index f7066e2920..0770f3fce5 100644 --- a/30daysofIA/tags/azure-container-apps/page/2/index.html +++ b/30daysofIA/tags/azure-container-apps/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "azure-container-apps" | Build Intelligent Apps On Azure +2 posts tagged with "azure-container-apps" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "azure-container-apps"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "azure-container-apps"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/azure-cosmos-db/index.html b/30daysofIA/tags/azure-cosmos-db/index.html index d36ba902fb..7624dc1c98 100644 --- a/30daysofIA/tags/azure-cosmos-db/index.html +++ b/30daysofIA/tags/azure-cosmos-db/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "azure-cosmos-db" | Build Intelligent Apps On Azure +2 posts tagged with "azure-cosmos-db" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "azure-cosmos-db"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that the JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of the JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to an exploration of the Contoso Real Estate project, from its frontend to its backend and future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: In the AI era, we dive into the Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack-Together Module collection
  2. Let's #HackTogether: JavaScript on Azure Keynote
  3. Step-by-Step Guide: Migrating from the v3 to the v4 programming model for Azure Functions Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along with the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series, or building an end-to-end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/azure-cosmos-db/page/2/index.html b/30daysofIA/tags/azure-cosmos-db/page/2/index.html index 51209e2f63..c30269f953 100644 --- a/30daysofIA/tags/azure-cosmos-db/page/2/index.html +++ b/30daysofIA/tags/azure-cosmos-db/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "azure-cosmos-db" | Build Intelligent Apps On Azure +2 posts tagged with "azure-cosmos-db" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "azure-cosmos-db"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "azure-cosmos-db"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/azure-functions/index.html b/30daysofIA/tags/azure-functions/index.html index e0a25c32e8..b37663f300 100644 --- a/30daysofIA/tags/azure-functions/index.html +++ b/30daysofIA/tags/azure-functions/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "azure-functions" | Build Intelligent Apps On Azure +2 posts tagged with "azure-functions" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "azure-functions"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that the JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of the JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to an exploration of the Contoso Real Estate project, from its frontend to its backend and future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: In the AI era, we dive into the Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack-Together Module collection
  2. Let's #HackTogether: JavaScript on Azure Keynote
  3. Step-by-Step Guide: Migrating from the v3 to the v4 programming model for Azure Functions Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along with the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series, or building an end-to-end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/azure-functions/page/2/index.html b/30daysofIA/tags/azure-functions/page/2/index.html index 9b4bda8efe..2788cc52e9 100644 --- a/30daysofIA/tags/azure-functions/page/2/index.html +++ b/30daysofIA/tags/azure-functions/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "azure-functions" | Build Intelligent Apps On Azure +2 posts tagged with "azure-functions" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "azure-functions"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "azure-functions"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/azure-kubernetes-service/index.html b/30daysofIA/tags/azure-kubernetes-service/index.html index a28fe6e0c3..695599e3f8 100644 --- a/30daysofIA/tags/azure-kubernetes-service/index.html +++ b/30daysofIA/tags/azure-kubernetes-service/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "azure-kubernetes-service" | Build Intelligent Apps On Azure +2 posts tagged with "azure-kubernetes-service" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "azure-kubernetes-service"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that the JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of the JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to an exploration of the Contoso Real Estate project, from its frontend to its backend and future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: In the AI era, we dive into the Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack-Together Module collection
  2. Let's #HackTogether: JavaScript on Azure Keynote
  3. Step-by-Step Guide: Migrating from the v3 to the v4 programming model for Azure Functions Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along with the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series, or building an end-to-end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/azure-kubernetes-service/page/2/index.html b/30daysofIA/tags/azure-kubernetes-service/page/2/index.html index c1ea82a551..3bf3cb4b17 100644 --- a/30daysofIA/tags/azure-kubernetes-service/page/2/index.html +++ b/30daysofIA/tags/azure-kubernetes-service/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "azure-kubernetes-service" | Build Intelligent Apps On Azure +2 posts tagged with "azure-kubernetes-service" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "azure-kubernetes-service"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "azure-kubernetes-service"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/azure-openai/index.html b/30daysofIA/tags/azure-openai/index.html index 5d05d5a475..7bc25a59ac 100644 --- a/30daysofIA/tags/azure-openai/index.html +++ b/30daysofIA/tags/azure-openai/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "azure-openai" | Build Intelligent Apps On Azure +2 posts tagged with "azure-openai" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "azure-openai"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that the JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of the JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to an exploration of the Contoso Real Estate project, from its frontend to its backend and future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: In the AI era, we dive into the Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack-Together Module collection
  2. Let's #HackTogether: JavaScript on Azure Keynote
  3. Step-by-Step Guide: Migrating from the v3 to the v4 programming model for Azure Functions Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along with the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series, or building an end-to-end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/azure-openai/page/2/index.html b/30daysofIA/tags/azure-openai/page/2/index.html index 6f68d26c2f..a72bc58a14 100644 --- a/30daysofIA/tags/azure-openai/page/2/index.html +++ b/30daysofIA/tags/azure-openai/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "azure-openai" | Build Intelligent Apps On Azure +2 posts tagged with "azure-openai" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "azure-openai"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "azure-openai"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/community-buzz/index.html b/30daysofIA/tags/community-buzz/index.html index 2dc01d283c..88fdfbf02b 100644 --- a/30daysofIA/tags/community-buzz/index.html +++ b/30daysofIA/tags/community-buzz/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "community-buzz" | Build Intelligent Apps On Azure +2 posts tagged with "community-buzz" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "community-buzz"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that the JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of the JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to an exploration of the Contoso Real Estate project, from its frontend to its backend and future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate Open-source project!, managing complex and complex enterprise architecture, new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your Frontend using Static Web Apps as part of a complex, modern composable frontends (or micro-frontends) and cloud-native applications.

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to an Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: Being in the AI Era, we dive into the Azure OpenAI Service and how you can start to build intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack Together Module collection
  2. Let's #HackTogether: JavaScript On Azure Keynote
  3. Step by Step Guide: Migrating from the v3 to v4 programming model for Azure Functions for Node.js applications (a minimal v4 sketch follows below)
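
To give a feel for what the code looks like after that migration, here is a minimal, hedged sketch of an HTTP-triggered function in the Azure Functions v4 programming model for Node.js (using the @azure/functions v4 package); the function name and the greeting logic are illustrative only.

```js
// Minimal sketch: an HTTP trigger in the Azure Functions v4 programming model for Node.js.
// Assumes the @azure/functions v4 package; the "hello" name and logic are illustrative.
const { app } = require("@azure/functions");

app.http("hello", {
  methods: ["GET", "POST"],
  authLevel: "anonymous",
  handler: async (request, context) => {
    context.log(`Handling ${request.method} request for ${request.url}`);

    // Read a name from the query string or the request body, with a fallback.
    const name = request.query.get("name") || (await request.text()) || "world";
    return { body: `Hello, ${name}!` };
  },
});
```

The main change the guide walks through is that, in v4, triggers and bindings are declared in code like this instead of in a separate function.json file.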

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series or building an end to end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/community-buzz/page/2/index.html b/30daysofIA/tags/community-buzz/page/2/index.html index f395ba1c7d..f0f7f9b7cf 100644 --- a/30daysofIA/tags/community-buzz/page/2/index.html +++ b/30daysofIA/tags/community-buzz/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "community-buzz" | Build Intelligent Apps On Azure +2 posts tagged with "community-buzz" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "community-buzz"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "community-buzz"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/fall-for-ia/index.html b/30daysofIA/tags/fall-for-ia/index.html index c3d58bae75..54a2ebb8d8 100644 --- a/30daysofIA/tags/fall-for-ia/index.html +++ b/30daysofIA/tags/fall-for-ia/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "Fall-For-IA" | Build Intelligent Apps On Azure +2 posts tagged with "Fall-For-IA" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "Fall-For-IA"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of The JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to exploring the Contoso Real Estate project, from its frontend to its backend and its future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your Frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications.

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: With the AI era upon us, we dive into Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack Together Module collection
  2. Let's #HackTogether: JavaScript On Azure Keynote
  3. Step by Step Guide: Migrating from the v3 to v4 programming model for Azure Functions for Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series or building an end to end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/fall-for-ia/page/2/index.html b/30daysofIA/tags/fall-for-ia/page/2/index.html index e50344f10f..7b2dfec027 100644 --- a/30daysofIA/tags/fall-for-ia/page/2/index.html +++ b/30daysofIA/tags/fall-for-ia/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "Fall-For-IA" | Build Intelligent Apps On Azure +2 posts tagged with "Fall-For-IA" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "Fall-For-IA"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "Fall-For-IA"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/github-actions/index.html b/30daysofIA/tags/github-actions/index.html index 30fd08886a..e1ea350a18 100644 --- a/30daysofIA/tags/github-actions/index.html +++ b/30daysofIA/tags/github-actions/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "github-actions" | Build Intelligent Apps On Azure +2 posts tagged with "github-actions" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "github-actions"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of The JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to exploring the Contoso Real Estate project, from its frontend to its backend and its future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your Frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications.

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: With the AI era upon us, we dive into Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack Together Module collection
  2. Let's #HackTogether: JavaScript On Azure Keynote
  3. Step by Step Guide: Migrating from the v3 to v4 programming model for Azure Functions for Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series or building an end to end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/github-actions/page/2/index.html b/30daysofIA/tags/github-actions/page/2/index.html index d433fdfca2..5f519938cb 100644 --- a/30daysofIA/tags/github-actions/page/2/index.html +++ b/30daysofIA/tags/github-actions/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "github-actions" | Build Intelligent Apps On Azure +2 posts tagged with "github-actions" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "github-actions"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "github-actions"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/github-codespaces/index.html b/30daysofIA/tags/github-codespaces/index.html index c0b07a148a..1ebd3dff77 100644 --- a/30daysofIA/tags/github-codespaces/index.html +++ b/30daysofIA/tags/github-codespaces/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "github-codespaces" | Build Intelligent Apps On Azure +2 posts tagged with "github-codespaces" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "github-codespaces"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of The JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to exploring the Contoso Real Estate project, from its frontend to its backend and its future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your Frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications.

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: With the AI era upon us, we dive into Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack Together Module collection
  2. Let's #HackTogether: JavaScript On Azure Keynote
  3. Step by Step Guide: Migrating from the v3 to v4 programming model for Azure Functions for Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series or building an end to end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/github-codespaces/page/2/index.html b/30daysofIA/tags/github-codespaces/page/2/index.html index 99ad1ac671..06163776e5 100644 --- a/30daysofIA/tags/github-codespaces/page/2/index.html +++ b/30daysofIA/tags/github-codespaces/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "github-codespaces" | Build Intelligent Apps On Azure +2 posts tagged with "github-codespaces" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "github-codespaces"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "github-codespaces"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/github-copilot/index.html b/30daysofIA/tags/github-copilot/index.html index 84d8c9358e..694959b1ad 100644 --- a/30daysofIA/tags/github-copilot/index.html +++ b/30daysofIA/tags/github-copilot/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "github-copilot" | Build Intelligent Apps On Azure +2 posts tagged with "github-copilot" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "github-copilot"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of The JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to exploring the Contoso Real Estate project, from its frontend to its backend and its future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your Frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications.

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: With the AI era upon us, we dive into Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack Together Module collection
  2. Let's #HackTogether: JavaScript On Azure Keynote
  3. Step by Step Guide: Migrating from the v3 to v4 programming model for Azure Functions for Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series or building an end to end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/github-copilot/page/2/index.html b/30daysofIA/tags/github-copilot/page/2/index.html index bb85e80b6d..033039d8f2 100644 --- a/30daysofIA/tags/github-copilot/page/2/index.html +++ b/30daysofIA/tags/github-copilot/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "github-copilot" | Build Intelligent Apps On Azure +2 posts tagged with "github-copilot" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "github-copilot"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "github-copilot"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/hack-together/index.html b/30daysofIA/tags/hack-together/index.html index 6ce644eefd..a4f76da8b6 100644 --- a/30daysofIA/tags/hack-together/index.html +++ b/30daysofIA/tags/hack-together/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "hack-together" | Build Intelligent Apps On Azure +2 posts tagged with "hack-together" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "hack-together"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of The JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to exploring the Contoso Real Estate project, from its frontend to its backend and its future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your Frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications.

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: With the AI era upon us, we dive into Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack Together Module collection
  2. Let's #HackTogether: JavaScript On Azure Keynote
  3. Step by Step Guide: Migrating from the v3 to v4 programming model for Azure Functions for Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series or building an end to end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/hack-together/page/2/index.html b/30daysofIA/tags/hack-together/page/2/index.html index 7d06849596..4c69c8dc31 100644 --- a/30daysofIA/tags/hack-together/page/2/index.html +++ b/30daysofIA/tags/hack-together/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "hack-together" | Build Intelligent Apps On Azure +2 posts tagged with "hack-together" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "hack-together"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "hack-together"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/30daysofIA/tags/index.html b/30daysofIA/tags/index.html index 64a8cb6fb0..6a2e3f1d83 100644 --- a/30daysofIA/tags/index.html +++ b/30daysofIA/tags/index.html @@ -14,13 +14,13 @@ - +
Skip to main content
- + \ No newline at end of file diff --git a/30daysofIA/tags/learn-live/index.html b/30daysofIA/tags/learn-live/index.html index a8612f7f37..581a7aee33 100644 --- a/30daysofIA/tags/learn-live/index.html +++ b/30daysofIA/tags/learn-live/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "learn-live" | Build Intelligent Apps On Azure +2 posts tagged with "learn-live" | Build Intelligent Apps On Azure @@ -14,14 +14,14 @@ - +
Skip to main content

2 posts tagged with "learn-live"

View All Tags

· 4 min read
It's 30DaysOfIA

Continue The Learning Journey through Fall For Intelligent Apps! 🍂

What We'll Cover

Thank you! ♥️

image

It's hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!

From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. ✨

Recap of The JavaScript on Azure Global Hack-Together

As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to open source to exploring the Contoso Real Estate project, from its frontend to its backend and its future AI implementation.

Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you're looking to make your first contribution to open source, become an open source maintainer, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!

JSonAzure Hack-together Roadmap 📍:

hack-together-roadmap (2)

Recap on past Livestreams🌟:

Day 1️⃣: Opening Keynote (Hack-together Launch): Introduction to the Contoso Real Estate open-source project, managing complex enterprise architecture, and new announcements for JavaScript developers on Azure

Day 2️⃣: GitHub Copilot & Codespaces: Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)

Day 6️⃣: Build your Frontend using Static Web Apps as part of complex, modern composable frontends (or micro-frontends) and cloud-native applications.

Day 9️⃣: Build a Serverless Backend using Azure Functions

Day 1️⃣3️⃣: Easily connect to Azure Cosmos DB, exploring its benefits and how to get started

Day 1️⃣5️⃣: With the AI era upon us, we dive into Azure OpenAI Service and how you can start building intelligent JavaScript applications

📖 Self-Learning Resources

  1. JavaScript on Azure Global Hack Together Module collection
  2. Let's #HackTogether: JavaScript On Azure Keynote
  3. Step by Step Guide: Migrating from the v3 to v4 programming model for Azure Functions for Node.js applications

Continue your journey with #FallForIntelligentApps

Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series or building an end to end solution architecture with a live guided experience through the Learn Live series. Discover more here.

Hands-on practice: Make your first contribution to open source!

Join our GitHub Discussion Forum to connect with developers from every part of the world, see contributions from others, find collaborators, and make your first contribution to a real-world project! Don't forget to give the repo a star ⭐

Resources

All resources are accessible on our landing page

- + \ No newline at end of file diff --git a/30daysofIA/tags/learn-live/page/2/index.html b/30daysofIA/tags/learn-live/page/2/index.html index 5a4fc9e73c..0c01549a57 100644 --- a/30daysofIA/tags/learn-live/page/2/index.html +++ b/30daysofIA/tags/learn-live/page/2/index.html @@ -3,7 +3,7 @@ -2 posts tagged with "learn-live" | Build Intelligent Apps On Azure +2 posts tagged with "learn-live" | Build Intelligent Apps On Azure @@ -14,13 +14,13 @@ - +
-
Skip to main content

2 posts tagged with "learn-live"

View All Tags

· One min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

- +
Skip to main content

2 posts tagged with "learn-live"

View All Tags

· 2 min read
It's 30DaysOfIA

September is almost here - and that can only mean one thing!! It's time to 🍂 Fall for something new and exciting and spend a few weeks skilling up on relevant tools, technologies and solutions!!

Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale Data, cloud-native Technologies and cloud-based AI integrations to help you modernize and build intelligent apps for the enterprise!

Watch this space - and join us in September to learn more!

+ \ No newline at end of file diff --git a/404.html b/404.html index 60a82cceef..e4e589bb63 100644 --- a/404.html +++ b/404.html @@ -14,13 +14,13 @@ - +
Skip to main content

Page Not Found

We could not find what you were looking for.

Please contact the owner of the site that linked you to the original URL and let them know their link is broken.

- + \ No newline at end of file diff --git a/Fall-For-IA/AskTheExpert/index.html b/Fall-For-IA/AskTheExpert/index.html index 7f1883c053..2d349a59e8 100644 --- a/Fall-For-IA/AskTheExpert/index.html +++ b/Fall-For-IA/AskTheExpert/index.html @@ -14,13 +14,13 @@ - +
Skip to main content

Ask The Expert

  1. Open a New Issue on the repo.
  2. Click Get Started on the 🎤 Ask the Expert! template.
  3. Fill in the details and submit! (If you prefer to automate this, see the hedged sketch just below.)
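
Since each question is ultimately just a GitHub issue on the event repo, you could also open one from a script. Below is a hedged sketch using the @octokit/rest package; the OWNER/REPO values, the label, and the sample text are placeholders for illustration, not the event's actual settings.

```js
// Hedged sketch: open an "Ask the Expert" question as a GitHub issue via @octokit/rest.
// OWNER/REPO, the label, and the sample text are placeholders, not the event's real settings.
const { Octokit } = require("@octokit/rest");

async function main() {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

  const { data: issue } = await octokit.rest.issues.create({
    owner: "OWNER",             // placeholder: the event organization
    repo: "REPO",               // placeholder: the event repository
    title: "🎤 Ask the Expert: How do I scale Azure Functions for bursty traffic?",
    body: "Context: building an intelligent app with Azure Functions and Cosmos DB.",
    labels: ["ask-the-expert"], // placeholder label
  });

  console.log(`Created question: ${issue.html_url}`);
}

main().catch(console.error);
```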

Our team will review all submitted questions and prioritize them for the live ATE session. Questions that don't get answered live (due to time constraints) will be responded to here, in response to your submitted issue.


What is it?

Ask the Expert is a series of scheduled 30-minute LIVE broadcasts where you can connect with experts to get your questions answered! You can also visit the site later to view sessions on demand - and view answers to questions you may have submitted ahead of time.

Ask the Expert


How does it work?

The live broadcast will have a moderated chat session where you can submit questions in real time. We also have a custom 🎤 Ask The Expert issue you can use to submit questions ahead of time as mentioned earlier.

  • We strongly encourage you to submit questions early using that issue
  • Browse previously posted questions to reduce duplication.
  • Upvote (👍🏽) existing questions of interest to help us prioritize them for the live show.

Doing this will help us all in a few ways:

  • We can ensure that all questions get answered here, even if we run out of time on the live broadcast.
  • Others can vote (👍🏽) on your question - helping us prioritize them live based on popularity
  • We can update them with responses post-event for future readers.

When is it?

Visit the ATE : Fall for Intelligent Apps page to see the latest schedule and registration links! For convenience, we've replicated some information here. Please click the REGISTER TO ATTEND links to save the date and get notified of key details like links to the livestream (pre-event) and recording (post-event).

September 13, 2023 : Azure Container Apps Landing Zone Accelerator

It can be challenging to build and deploy cloud native apps at enterprise scale and get it right the first time. Landing Zone Accelerators help you address this challenge, providing guidance to deploy workloads faster, with better security, scalability, availability and lower cost; allowing you to operate confidently with better performance.

REGISTER TO ATTEND

September 20, 2023 : Fall for Intelligent Apps with Azure Container Apps (Option 1)

Join the Azure Container Apps Product Group this fall to learn about combining the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences with microservices. Azure Container Apps is an app-centric service, empowering developers to focus on the differentiating business logic of their apps rather than on cloud infrastructure management. Discuss with the experts on how to develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure Container Apps.

REGISTER TO ATTEND

September 20, 2023 : Fall for Intelligent Apps with Azure Container Apps (Option 2)

Join the Azure Container Apps Product Group this fall to learn about combining the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences with microservices. Azure Container Apps is an app-centric service, empowering developers to focus on the differentiating business logic of their apps rather than on cloud infrastructure management. Discuss with the experts on how to develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure Container Apps.

REGISTER TO ATTEND

September 26, 2023 : Fall for Intelligent Apps with Azure Functions (Option 1)

Join the Azure Functions Product Group this fall to learn about FaaS or Functions-as-a-Service in Azure serverless computing. It is time to focus on the pieces of code that matter most to you while Azure Functions handles the rest. Discuss with the experts on how to combine the power of AI, cloud-scale data, and serverless app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure Functions.

REGISTER TO ATTEND

September 26, 2023 : Fall for Intelligent Apps with Azure Functions (Option 2)

Join the Azure Functions Product Group this fall to learn about FaaS or Functions-as-a-Service in Azure serverless computing. It is time to focus on the pieces of code that matter most to you while Azure Functions handles the rest. Discuss with the experts on how to combine the power of AI, cloud-scale data, and serverless app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure Functions.

REGISTER TO ATTEND

- + \ No newline at end of file diff --git a/Fall-For-IA/CloudSkills/index.html b/Fall-For-IA/CloudSkills/index.html index bb640121df..02acecfc16 100644 --- a/Fall-For-IA/CloudSkills/index.html +++ b/Fall-For-IA/CloudSkills/index.html @@ -14,13 +14,13 @@ - +
Skip to main content

Cloud Skills Challenge

Use the link above to register for the Cloud Skills Challenge today! You will get an automatic email notification when the challenge kicks off, ensuring you don't waste any time! The challenge runs for 30 days (Sep 1 - Sep 30), so an early start helps!


Fall for Intelligent Apps Skills Challenge

Join us on a learning journey this fall to skill up on your core skills for developing intelligent apps. Explore how to combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences.

Fall for intelligent apps

  • Intelligent Apps Skills Challenge - Applications are at the core of intelligent solution development. Cloud-native app development empowers you to create modern containerized and serverless apps to build innovative solutions. Explore how to get started with building intelligent apps using Azure Kubernetes Service, Azure Functions and GitHub.

  • Data Skills Challenge - It is time to activate our enormous data stores for building data-driven intelligent solutions. Explore the capabilities of cloud-scale data with Microsoft Fabric in this Cloud Skills Challenge! Follow along with the Fabric Community @ https://aka.ms/fabriccommunity.

  • AI Skills Challenge - The world of generative AI is rapidly evolving. Learn how to create intelligent solutions that extract semantic meaning from text and support common computer vision scenarios. Explore how to take advantage of large-scale generative AI models with a deep understanding of language and code to enable new reasoning and comprehension capabilities for building cutting-edge applications responsibly.


About Cloud Skills

The Cloud Skills Challenge is a fun way to skill up on Azure serverless technologies while competing with other members of the community for a chance to win fun swag!

About Cloud Skills

You'll work your way through learning modules that skill you up on relevant technologies - while collecting points that place you on a Leaderboard.

  1. 🎯 Compete - Benchmark your progress against friends and coworkers.
  2. 🎓 Learn - Increase your understanding by completing learning modules.
  3. 🏆 Skill Up - Gain useful technical skills and prep for certifications.

About Microsoft Learn

Completed the Cloud Skills Challenge, and want to keep going on your learning journey? Or, perhaps there are other Cloud+AI topics you want to skill up in? Check out these three resources for building your professional profile!

1️⃣ - LEARNING PATHS: Skill up on a topic with guided paths for self-study!
2️⃣ - CERTIFICATIONS: Showcase your expertise with industry-recognized credentials!
3️⃣ - LEARNING EVENTS: Learn from subject matter experts in live & recorded events
- + \ No newline at end of file diff --git a/Fall-For-IA/CommunityGallery/index.html b/Fall-For-IA/CommunityGallery/index.html index de4f950a99..bdd3045101 100644 --- a/Fall-For-IA/CommunityGallery/index.html +++ b/Fall-For-IA/CommunityGallery/index.html @@ -14,13 +14,13 @@ - +
-
Skip to main content

Community Gallery

Explore the Community Showcase for videos, blog posts and open-source projects from the community.

Filters

21 posts

Featured Posts


  • Build a Serverless Backend with Azure Functions

    In this session, we'll give you a gentle introduction to serverless backends and how Azure Functions can help you build them quickly and easily. We'll start by discussing the benefits of using serverless backends for your web projects and how Azure Functions can help you get started quickly. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Functions to power its backend.

  •  ♥️ featured
  • azure functions
  • cosmos db
  • serverless
  • video
  • Build and connect to a Database using Azure Cosmos DB

    In this session, we'll give you a gentle introduction to Azure Cosmos DB and how it can help you store and manage your data in the cloud. We'll start by discussing the benefits of using Azure Cosmos DB for your data storage needs, including its global distribution and scalability. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Cosmos DB to store its data.

  •  ♥️ featured
  • cosmos db
  • video
  • Build your Frontend with Azure Static Web Apps

    In this session, we'll give you a gentle introduction to Static Web Apps and the SWA CLI. We'll start by discussing the benefits of using Static Web Apps for your web projects and how the SWA CLI can help you get started quickly. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Static Web Apps to deploy changes quickly and easily.

  •  ♥️ featured
  • azure functions
  • cosmos db
  • video
  • Hack Together Launch – Opening Keynote

    Join us for an in-depth walkthrough of the Contoso Real Estate project, with a focus on the portal app architecture (Full stack application). During this session, we'll guide you through the key components of the architecture and show you how to set up your own environment for the project. We'll also provide detailed contribution instructions to help you get involved in the project and make a meaningful impact. Whether you're a seasoned developer or just getting started, this session is a must-attend for anyone interested in building scalable, modern web applications.

  •  ♥️ featured
  • azure functions
  • cosmos db
  • video
  • Introduction to Azure OpenAI Service

    Join us for an exciting introduction to the world of AI with Azure OpenAI. In this session, you'll learn how to harness the power of OpenAI to build intelligent applications that can learn, reason, and adapt. We'll cover the basics of Azure OpenAI, including how to set up and configure your environment, and walk you through a series of hands-on exercises to help you get started. Whether you're a seasoned developer or just starting out, this session is the perfect way to unlock the full potential of AI and take your applications to the next level.

  •  ♥️ featured
  • azure openai
  • ai
  • video
  • Introduction to GitHub Copilot

    Join us for an exciting introduction to GitHub Copilot, the revolutionary AI-powered coding assistant. In this session, you'll learn how to harness the power of Copilot to write code faster and more efficiently than ever before. We'll cover the basics of Copilot, including how to install and configure it, and walk you through a series of hands-on exercises to help you get started. Whether you're a seasoned developer or just starting out, this session is the perfect way to take your coding skills to the next level.

  •  ♥️ featured
  • github
  • ai
  • video
  • All Posts


  • Ask the Expert: Serverless September | Azure Functions

    Join the Azure Functions Product Group this Serverless September to learn about FaaS or Functions-as-a-Service in Azure serverless computing. It is time to focus on the pieces of code that matter most to you while Azure Functions handles the rest. Discuss with the experts on how to execute event-driven serverless code functions with an end-to-end development experience using Azure Functions.

  • azure functions
  • serverless
  • video
  • Azure Samples / Azure Container Apps That Use OpenAI

    This sample demonstrates how to quickly build chat applications using Python and leveraging powerful technologies such as OpenAI ChatGPT models, Embedding models, LangChain framework, ChromaDB vector database, and Chainlit, an open-source Python package that is specifically designed to create user interfaces (UIs) for AI applications. These applications are hosted on Azure Container Apps, a fully managed environment that enables you to run microservices and containerized applications on a serverless platform.

  • azure container apps
  • azure openai
  • code sample
  • Azure Samples / Contoso Real Estate

    This repository contains the reference architecture and components for building enterprise-grade modern composable frontends (or micro-frontends) and cloud-native applications. It is a collection of best practices, architecture patterns, and functional components that can be used to build and deploy modern JavaScript applications to Azure.

  • azure container apps
  • azure functions
  • github
  • cosmos db
  • code sample
  • Build a Serverless Backend with Azure Functions

    In this session, we'll give you a gentle introduction to serverless backends and how Azure Functions can help you build them quickly and easily. We'll start by discussing the benefits of using serverless backends for your web projects and how Azure Functions can help you get started quickly. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Functions to power its backend.

  •  ♥️ featured
  • azure functions
  • cosmos db
  • serverless
  • video
  • Build an intelligent application fast and flexibly using Open Source on Azure

    Watch this end-to-end demo of an intelligent app that was built using a combination of open source technologies developed by Microsoft and the community. Highlights of the demo include announcements and key technologies.

  • azure functions
  • video
  • Build and connect to a Database using Azure Cosmos DB

    In this session, we'll give you a gentle introduction to Azure Cosmos DB and how it can help you store and manage your data in the cloud. We'll start by discussing the benefits of using Azure Cosmos DB for your data storage needs, including its global distribution and scalability. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Cosmos DB to store its data.

  •  ♥️ featured
  • cosmos db
  • video
  • Build Intelligent Microservices with Azure Container Apps

    Azure Container Apps (ACA) is a great place to run intelligent microservices, APIs, event-driven apps, and more. Infuse AI with Azure Container Apps jobs, leverage adaptable design patterns with Dapr, and explore flexible containerized compute for microservices across serverless or dedicated options.

  • azure container apps
  • video
  • Build scalable, cloud-native apps with AKS and Azure Cosmos DB

    Develop, deploy, and scale cloud-native applications that are high-performance, fast, and can handle traffic bursts with ease. Explore the latest news and capabilities for Azure Kubernetes Service (AKS) and Azure Cosmos DB, and hear from Rockwell Automation about how they've used Azure's cloud-scale app and data services to create global applications.

  • azure kubernetes service
  • cosmos db
  • kubernetes
  • video
  • Build your Frontend with Azure Static Web Apps

    In this session, we'll give you a gentle introduction to Static Web Apps and the SWA CLI. We'll start by discussing the benefits of using Static Web Apps for your web projects and how the SWA CLI can help you get started quickly. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Static Web Apps to deploy changes quickly and easily.

  •  ♥️ featured
  • azure functions
  • cosmos db
  • video
  • Building and scaling cloud-native intelligent applications on Azure

    Learn how to run cloud-native serverless and container applications in Azure using Azure Kubernetes Service and Azure Container Apps. We help you choose the right service for your apps. We show you how Azure is the best platform for hosting cloud native and intelligent apps, and an app using Azure OpenAI Service and Azure Data. Learn all the new capabilities of our container platforms including how to deploy, test for scale, monitor, and much more.

  • azure kubernetes service
  • azure container apps
  • azure openai
  • ai
  • kubernetes
  • video
  • Cloud-Native New Year - Azure Kubernetes Service

    Join the Azure Kubernetes Service Product Group this New Year to learn about cloud-native development using Kubernetes on Azure. It is time to accelerate your cloud-native application development by leveraging Kubernetes, the de-facto container platform. Discuss with the experts how to develop, manage, scale, and secure managed Kubernetes clusters on Azure, with an end-to-end development and management experience using Azure Kubernetes Service and Azure Fleet Manager.

  • azure kubernetes service
  • kubernetes
  • video
  • Deliver apps from code to cloud with Azure Kubernetes Service

    Do you want to build and run cloud-native apps in Microsoft Azure with ease and confidence? Do you want to leverage the power and flexibility of Kubernetes, without the hassle and complexity of managing it yourself? Or maybe you want to learn about the latest and greatest features and integrations that Azure Kubernetes Service (AKS) has to offer? If you answered yes to any of these questions, then this session is for you!

  • azure kubernetes service
  • kubernetes
  • video
  • Focus on code not infra with Azure Functions Azure Spring Apps Dapr

    Explore an easy on-ramp to build your cloud-native APIs with containers in the cloud. Build an application that uses Azure Spring Apps to send messages to a Dapr-enabled message broker, triggering optimized processing with Azure Functions, all hosted in the same Azure Container Apps environment. This unified microservices experience hosts multiple types of apps that interact with each other using Dapr and scale dynamically with KEDA, so you can focus on code and enjoy a truly high-productivity developer experience.

  • azure container apps
  • azure functions
  • video
  • Hack Together Launch – Opening Keynote

    Join us for an in-depth walkthrough of the Contoso Real Estate project, with a focus on the portal app architecture (Full stack application). During this session, we'll guide you through the key components of the architecture and show you how to set up your own environment for the project. We'll also provide detailed contribution instructions to help you get involved in the project and make a meaningful impact. Whether you're a seasoned developer or just getting started, this session is a must-attend for anyone interested in building scalable, modern web applications.

  •  ♥️ featured
  • azure functions
  • cosmos db
  • video
  • Integrating Azure AI and Azure Kubernetes Service to build intelligent apps

    Build intelligent apps that leverage Azure AI services for natural language processing and machine learning, including Azure OpenAI Service, together with Azure Kubernetes Service (AKS) and other Azure application platform services. Learn best practices to help you achieve optimal scalability, reliability, and automation with CI/CD using GitHub. By the end of this session, you will have a better understanding of how to build and deploy intelligent applications on Azure that deliver measurable impact.

  • azure kubernetes service
  • azure openai
  • github
  • ai
  • kubernetes
  • video
  • Introduction to Azure OpenAI Service

    Join us for an exciting introduction to the world of AI with Azure OpenAI. In this session, you'll learn how to harness the power of OpenAI to build intelligent applications that can learn, reason, and adapt. We'll cover the basics of Azure OpenAI, including how to set up and configure your environment, and walk you through a series of hands-on exercises to help you get started. Whether you're a seasoned developer or just starting out, this session is the perfect way to unlock the full potential of AI and take your applications to the next level.
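    To give a rough idea of what "getting started" looks like in JavaScript, here is a minimal sketch that calls a chat model through the `@azure/openai` client library; the endpoint, key, deployment name, and prompts are placeholders you would replace with values from your own Azure OpenAI resource.

```typescript
// Hypothetical chat completion against Azure OpenAI using the @azure/openai SDK.
// Endpoint, key, and deployment name are placeholders for your own resource.
import { OpenAIClient, AzureKeyCredential } from "@azure/openai";

async function askAssistant(): Promise<void> {
  const client = new OpenAIClient(
    process.env.AZURE_OPENAI_ENDPOINT ?? "https://<your-resource>.openai.azure.com",
    new AzureKeyCredential(process.env.AZURE_OPENAI_KEY ?? "<your-key>")
  );

  const deployment = "gpt-35-turbo"; // the name you gave your model deployment
  const result = await client.getChatCompletions(deployment, [
    { role: "system", content: "You are a helpful assistant for a real estate portal." },
    { role: "user", content: "Summarize why serverless backends pair well with AI features." },
  ]);

  console.log(result.choices[0]?.message?.content);
}

askAssistant().catch(console.error);
```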

  •  ♥️ featured
  • azure openai
  • ai
  • video
  • Introduction to GitHub Copilot

    Join us for an exciting introduction to GitHub Copilot, the revolutionary AI-powered coding assistant. In this session, you'll learn how to harness the power of Copilot to write code faster and more efficiently than ever before. We'll cover the basics of Copilot, including how to install and configure it, and walk you through a series of hands-on exercises to help you get started. Whether you're a seasoned developer or just starting out, this session is the perfect way to take your coding skills to the next level.

  •  ♥️ featured
  • github
  • ai
  • video
  • Modernizing with containers and serverless Q&A

    Join the Azure cloud-native team to dive deeper into developing modern apps on cloud with containers and serverless technologies. Explore how to leverage the latest product advancements in Azure Kubernetes Service, Azure Container Apps and Azure Functions for scenarios that work best for cloud-native development. The experts cover best practices on how to develop with in-built open-source components like Kubernetes, KEDA, and Dapr to achieve high performance along with dynamic scaling.

  • azure kubernetes service
  • azure container apps
  • azure functions
  • kubernetes
  • video
  • Modernizing your applications with containers and serverless

    Dive into how cloud-native architectures and technologies can be applied to help build resilient and modern applications. Learn how to use technologies like containers, Kubernetes, and serverless, integrated with other application ecosystem services, to build and deploy a microservices architecture on Microsoft Azure. This discussion is ideal for developers, architects, and IT pros who want to learn how to effectively leverage Azure services to build, run and scale modern cloud-native applications.

  • azure kubernetes service
  • kubernetes
  • video
  • What the Hack: Serverless walkthrough

    The Azure Serverless What The Hack will take you through architecting a serverless solution on Azure for the use case of a Tollbooth Application that needs to meet demand with event-driven scale. This is a challenge-based hack. It’s NOT step-by-step. Don’t worry, you will do great whatever your level of experience!

  • serverless
  • cloud-native
  • video
    Skip to main content

    Community Gallery

    Explore the Community Showcase for videos, blog posts and open-source projects from the community.

    Filters

    21 posts

    Featured Posts


    • Build a Serverless Backend with Azure Functions

      In this session, we'll give you a gentle introduction to serverless backends and how Azure Functions can help you build them quickly and easily. We'll start by discussing the benefits of using serverless backends for your web projects and how Azure Functions can help you get started quickly. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Functions to power its backend.

    • Build and connect to a Database using Azure Cosmos DB

      In this session, we'll give you a gentle introduction to Azure Cosmos DB and how it can help you store and manage your data in the cloud. We'll start by discussing the benefits of using Azure Cosmos DB for your data storage needs, including its global distribution and scalability. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Cosmos DB to store its data.

    • Build your Frontend with Azure Static Web Apps

      In this session, we'll give you a gentle introduction to Static Web Apps and the SWA CLI. We'll start by discussing the benefits of using Static Web Apps for your web projects and how the SWA CLI can help you get started quickly. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Static Web Apps to deploy changes quickly and easily.

    • Hack Together Launch – Opening Keynote

      Join us for an in-depth walkthrough of the Contoso Real Estate project, with a focus on the portal app architecture (Full stack application). During this session, we'll guide you through the key components of the architecture and show you how to set up your own environment for the project. We'll also provide detailed contribution instructions to help you get involved in the project and make a meaningful impact. Whether you're a seasoned developer or just getting started, this session is a must-attend for anyone interested in building scalable, modern web applications.

    • Introduction to Azure OpenAI Service

      Join us for an exciting introduction to the world of AI with Azure OpenAI. In this session, you'll learn how to harness the power of OpenAI to build intelligent applications that can learn, reason, and adapt. We'll cover the basics of Azure OpenAI, including how to set up and configure your environment, and walk you through a series of hands-on exercises to help you get started. Whether you're a seasoned developer or just starting out, this session is the perfect way to unlock the full potential of AI and take your applications to the next level.

    • Introduction to GitHub Copilot

      Join us for an exciting introduction to GitHub Copilot, the revolutionary AI-powered coding assistant. In this session, you'll learn how to harness the power of Copilot to write code faster and more efficiently than ever before. We'll cover the basics of Copilot, including how to install and configure it, and walk you through a series of hands-on exercises to help you get started. Whether you're a seasoned developer or just starting out, this session is the perfect way to take your coding skills to the next level.

    All Posts


    • Ask the Expert: Serverless September | Azure Container Apps

      Join the Azure Container Apps Product Group this Serverless September to learn about serverless containers purpose-built for microservices. Azure Container Apps is an app-centric service, empowering developers to focus on the differentiating business logic of their apps rather than on cloud infrastructure management. Discuss with the experts how to build and deploy modern apps and microservices using serverless containers with Azure Container Apps.

    • Ask the Expert: Serverless September | Azure Functions

      Join the Azure Functions Product Group this Serverless September to learn about FaaS, or Functions-as-a-Service, in Azure serverless computing. It is time to focus on the pieces of code that matter most to you while Azure Functions handles the rest. Discuss with the experts how to execute event-driven serverless code functions with an end-to-end development experience using Azure Functions.

    • Azure Samples / Azure Container Apps That Use OpenAI

      This sample demonstrates how to quickly build chat applications using Python and leveraging powerful technologies such as OpenAI ChatGPT models, Embedding models, LangChain framework, ChromaDB vector database, and Chainlit, an open-source Python package that is specifically designed to create user interfaces (UIs) for AI applications. These applications are hosted on Azure Container Apps, a fully managed environment that enables you to run microservices and containerized applications on a serverless platform.

    • Azure Samples / Contoso Real Estate

      This repository contains the reference architecture and components for building enterprise-grade modern composable frontends (or micro-frontends) and cloud-native applications. It is a collection of best practices, architecture patterns, and functional components that can be used to build and deploy modern JavaScript applications to Azure.

    • Build a Serverless Backend with Azure Functions

      In this session, we'll give you a gentle introduction to serverless backends and how Azure Functions can help you build them quickly and easily. We'll start by discussing the benefits of using serverless backends for your web projects and how Azure Functions can help you get started quickly. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Functions to power its backend.

    • Build an intelligent application fast and flexibly using Open Source on Azure

      Watch this end-to-end demo of an intelligent app built using a combination of open-source technologies developed by Microsoft and the community. The demo highlights recent announcements and the key technologies involved.

    • Build and connect to a Database using Azure Cosmos DB

      In this session, we'll give you a gentle introduction to Azure Cosmos DB and how it can help you store and manage your data in the cloud. We'll start by discussing the benefits of using Azure Cosmos DB for your data storage needs, including its global distribution and scalability. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Cosmos DB to store its data.

    • Build Intelligent Microservices with Azure Container Apps

      Azure Container Apps (ACA) is a great place to run intelligent microservices, APIs, event-driven apps, and more. Infuse AI with Azure Container Apps jobs, leverage adaptable design patterns with Dapr, and explore flexible containerized compute for microservices across serverless or dedicated options.

    • Build scalable, cloud-native apps with AKS and Azure Cosmos DB

      Develop, deploy, and scale cloud-native applications that are high-performance, fast, and can handle traffic bursts with ease. Explore the latest news and capabilities for Azure Kubernetes Service (AKS) and Azure Cosmos DB, and hear from Rockwell Automation about how they've used Azure's cloud-scale app and data services to create global applications.

    • Build your Frontend with Azure Static Web Apps

      In this session, we'll give you a gentle introduction to Static Web Apps and the SWA CLI. We'll start by discussing the benefits of using Static Web Apps for your web projects and how the SWA CLI can help you get started quickly. Then, we'll dive into a demo of the Contoso Real Estate project, showing you how it uses Static Web Apps to deploy changes quickly and easily.

    • Building and scaling cloud-native intelligent applications on Azure

      Learn how to run cloud-native serverless and container applications in Azure using Azure Kubernetes Service and Azure Container Apps. We help you choose the right service for your apps, and we show you how Azure is the best platform for hosting cloud-native and intelligent apps, including an app built with Azure OpenAI Service and Azure data services. Learn about the new capabilities of our container platforms, including how to deploy, test for scale, monitor, and much more.

    • Cloud-Native New Year - Azure Kubernetes Service

      Join the Azure Kubernetes Service Product Group this New Year to learn about cloud-native development using Kubernetes on Azure. It is time to accelerate your cloud-native application development by leveraging Kubernetes, the de-facto container platform. Discuss with the experts how to develop, manage, scale, and secure managed Kubernetes clusters on Azure, with an end-to-end development and management experience using Azure Kubernetes Service and Azure Fleet Manager.

    • Deliver apps from code to cloud with Azure Kubernetes Service

      Do you want to build and run cloud-native apps in Microsoft Azure with ease and confidence? Do you want to leverage the power and flexibility of Kubernetes, without the hassle and complexity of managing it yourself? Or maybe you want to learn about the latest and greatest features and integrations that Azure Kubernetes Service (AKS) has to offer? If you answered yes to any of these questions, then this session is for you!

    • Focus on code not infra with Azure Functions Azure Spring Apps Dapr

      Explore an easy on-ramp to build your cloud-native APIs with containers in the cloud. Build an application that uses Azure Spring Apps to send messages to a Dapr-enabled message broker, triggering optimized processing with Azure Functions, all hosted in the same Azure Container Apps environment. This unified microservices experience hosts multiple types of apps that interact with each other using Dapr and scale dynamically with KEDA, so you can focus on code and enjoy a truly high-productivity developer experience.

    • Hack Together Launch – Opening Keynote

      Join us for an in-depth walkthrough of the Contoso Real Estate project, with a focus on the portal app architecture (Full stack application). During this session, we'll guide you through the key components of the architecture and show you how to set up your own environment for the project. We'll also provide detailed contribution instructions to help you get involved in the project and make a meaningful impact. Whether you're a seasoned developer or just getting started, this session is a must-attend for anyone interested in building scalable, modern web applications.

    • Integrating Azure AI and Azure Kubernetes Service to build intelligent apps

      Build intelligent apps that leverage Azure AI services for natural language processing and machine learning, including Azure OpenAI Service, together with Azure Kubernetes Service (AKS) and other Azure application platform services. Learn best practices to help you achieve optimal scalability, reliability, and automation with CI/CD using GitHub. By the end of this session, you will have a better understanding of how to build and deploy intelligent applications on Azure that deliver measurable impact.

    • Introduction to Azure OpenAI Service

      Join us for an exciting introduction to the world of AI with Azure OpenAI. In this session, you'll learn how to harness the power of OpenAI to build intelligent applications that can learn, reason, and adapt. We'll cover the basics of Azure OpenAI, including how to set up and configure your environment, and walk you through a series of hands-on exercises to help you get started. Whether you're a seasoned developer or just starting out, this session is the perfect way to unlock the full potential of AI and take your applications to the next level.

    • Introduction to GitHub Copilot

      Join us for an exciting introduction to GitHub Copilot, the revolutionary AI-powered coding assistant. In this session, you'll learn how to harness the power of Copilot to write code faster and more efficiently than ever before. We'll cover the basics of Copilot, including how to install and configure it, and walk you through a series of hands-on exercises to help you get started. Whether you're a seasoned developer or just starting out, this session is the perfect way to take your coding skills to the next level.

    • Modernizing with containers and serverless Q&A

      Join the Azure cloud-native team to dive deeper into developing modern apps on cloud with containers and serverless technologies. Explore how to leverage the latest product advancements in Azure Kubernetes Service, Azure Container Apps and Azure Functions for scenarios that work best for cloud-native development. The experts cover best practices on how to develop with in-built open-source components like Kubernetes, KEDA, and Dapr to achieve high performance along with dynamic scaling.

    • Modernizing your applications with containers and serverless

      Dive into how cloud-native architectures and technologies can be applied to help build resilient and modern applications. Learn how to use technologies like containers, Kubernetes, and serverless, integrated with other application ecosystem services, to build and deploy a microservices architecture on Microsoft Azure. This discussion is ideal for developers, architects, and IT pros who want to learn how to effectively leverage Azure services to build, run and scale modern cloud-native applications.

    • What the Hack: Serverless walkthrough

      The Azure Serverless What The Hack will take you through architecting a serverless solution on Azure for the use case of a Tollbooth Application that needs to meet demand with event-driven scale. This is a challenge-based hack. It’s NOT step-by-step. Don’t worry, you will do great whatever your level of experience!

    + \ No newline at end of file diff --git a/Fall-For-IA/HackTogether/index.html b/Fall-For-IA/HackTogether/index.html index 993fb5c49c..a4e2576b23 100644 --- a/Fall-For-IA/HackTogether/index.html +++ b/Fall-For-IA/HackTogether/index.html @@ -14,13 +14,13 @@ - +
    Skip to main content

    Hack Together: JS on Azure

    Learn about the core Application and AI technologies behind the Contoso Real Estate Sample. Then make your first open-source contribution!

    Hack Together
    Thumbnail Image for Hello, Contoso Real Estate!

    Hello, Contoso Real Estate!

    Get an overview of the Contoso Real Estate app and architecture.

    Thumbnail Image for Introduction to GitHub Copilot

    Introduction to GitHub Copilot

    Learn how to harness the power of Copilot from installation to usage.

    Thumbnail Image for Build Your Frontend With Azure Static Web Apps

    Build Your Frontend With Azure Static Web Apps

    Learn about Azure Static Web Apps and the SWA CLI - and how to use them.

    Thumbnail Image for Build a Serverless Backend with Functions

    Build a Serverless Backend with Functions

    See how Azure Functions powers the serverless backend for the app.

    Thumbnail Image for Build & Connect Your Database with Azure Cosmos DB

    Build & Connect Your Database with Azure Cosmos DB

    See how you can manage your data in Azure Cosmos DB, and how it is used within the Contoso app.

    Thumbnail Image for Introduction to Azure OpenAI Service

    Introduction to Azure OpenAI Service

    Learn the basics of Azure OpenAI Service and explore how you can use it.

    - + \ No newline at end of file diff --git a/Fall-For-IA/LearnLive/index.html b/Fall-For-IA/LearnLive/index.html index def53af33e..50d97ea416 100644 --- a/Fall-For-IA/LearnLive/index.html +++ b/Fall-For-IA/LearnLive/index.html @@ -14,13 +14,13 @@ - +
    Skip to main content

    Learn Live: Serverless Edition

    Learn to build an enterprise-grade serverless solution on Azure by deconstructing an open-source reference sample.

    Learn to build
    Thumbnail Image for Get Started With Contoso Real Estate

    Get Started With Contoso Real Estate

    Learn about the Contoso Real Estate sample, fork the repo, launch GitHub Codespaces - and build and preview the application to validate your environment.

    Thumbnail Image for Develop The Portal Application

    Develop The Portal Application

    Learn about micro-frontends and API-first design. Deconstruct the portal app, blog app, and serverless API.

    Thumbnail Image for Integrate Auth, Payments, Search

    Integrate Auth, Payments, Search

    Integrate authentication to support user profiles. Integrate payments and search features using third-party APIs.

    Thumbnail Image for Automate, Test & Deploy to Azure

    Automate, Test & Deploy to Azure

    Learn to design and run end-to-end tests with Playwright. Provision and deploy the solution to Azure with AZD.
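    As a small taste of the testing portion, here is a minimal Playwright end-to-end test sketch in TypeScript; the URL and the expected title are placeholders, not the actual checks used in the Learn Live series.

```typescript
// Hypothetical Playwright end-to-end test; URL and expected title are placeholders.
import { test, expect } from "@playwright/test";

test("portal home page loads", async ({ page }) => {
  await page.goto("http://localhost:4280"); // default SWA CLI emulator port, adjust as needed
  await expect(page).toHaveTitle(/Contoso/i);
  await expect(page.getByRole("navigation")).toBeVisible();
});
```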

    - + \ No newline at end of file diff --git a/Fall-For-IA/calendar/index.html b/Fall-For-IA/calendar/index.html index 8e6cbc8f70..dc643d2b37 100644 --- a/Fall-For-IA/calendar/index.html +++ b/Fall-For-IA/calendar/index.html @@ -14,13 +14,13 @@ - +
    Skip to main content

    Fall For Intelligent Apps 🍁

    #FallForIA kicks off in mid-September with initiatives to teach you the tools, technologies and skills you need to modernize your applications and build differentiated experiences with AI! Look for these signature events & more:

    • 🎙 Ask The Expert - live Q&A with product teams in Azure Functions, Azure Container Apps and more.
    • 👩🏽‍💻 Learn Live - live training series on building intelligent apps end-to-end on Azure with AI.
    • ✍🏽 #30DaysOfIA - series of daily blog posts organized in 4 themed weeks focused on intelligent apps.
    • 🎯 Cloud Skills Challenge - curated collection of Learn modules in Apps, Data & AI - for self-skilling!
    • 🐝 Community Buzz - activities to showcase your projects and contributions - including a gallery!

    We can't wait to unveil all the exciting content and events we've planned for September and October. But the Road to #FallForIA starts right now with signature CommunityBuzz events in August. Read on to learn where you can tune into livestreams, catch up on replays, and participate by making your first open-source contributions!


    Aug 2023

    🐝 #HackTogether

    Join us on this 15-day virtual hack experience where you'll learn about Contoso Real Estate (an open-source, real-world, enterprise-grade serverless app) and the technologies it uses - in 6 livestreamed sessions. Understand how you can deconstruct an open-source project and make your own contributions:

    Start your journey by watching the opening keynote, then track these three core resources for more:


    Sep 2023

    🍁 #FallForIA

    • 👩🏽‍💻 Sep 14 | #LearnLive Serverless - Deconstruct Contoso Real Estate (Architecture)
    • 🎯 Sep 15 | #CloudSkillsChallenge - Apps, Data and AI
    • ✍🏽 Sep 18 | #30DaysOfIA - Power Of Intelligent Applications
    • 🎙 Sep 20 | #AskTheExpert - Azure Container Apps
    • 👩🏽‍💻 Sep 21 | #LearnLive Serverless - Deconstruct Contoso Real Estate (Frontend Apps)
    • ✍🏽 Sep 25 | #30DaysOfIA - Build Intelligent Apps
    • 🎙 Sep 26 | #AskTheExpert - Azure Functions
    • 👩🏽‍💻 Sep 28 | #LearnLive Serverless - Deconstruct Contoso Real Estate (Backend Integrations)

    Oct 2023

    🍁 #FallForIA

    • 👩🏽‍💻 Oct 05 | #LearnLive Serverless - Deconstruct Contoso Real Estate (Testing & Deployment)

    Nov 2023

    🔥 #MSIgnite

    Experience the latest innovations around AI, learn from product and partner experts to advance your skills, and connect with your community. Join the community in person in Seattle, or online from anywhere in the world!

    - + \ No newline at end of file diff --git a/Fall-For-IA/index.html b/Fall-For-IA/index.html index 0040988c5f..ffc97887fd 100644 --- a/Fall-For-IA/index.html +++ b/Fall-For-IA/index.html @@ -14,13 +14,13 @@ - +
    Skip to main content

    🍂 Fall For Intelligent Apps!

    Join us this fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure for your users.

    #30DaysOfIA

    Join us on a #30Day journey that starts by demystifying Intelligent Apps and ends with you Building a Copilot!

    Learn Live

    Deconstruct an enterprise-grade, end-to-end reference sample for a serverless or Kubernetes application.

    Ask The Expert

    Join us for online conversations with the product teams - submit questions ahead of time or ask them live!

    Hack Together

    Explore this 6-part series from Microsoft Reactor on JS & AI on Azure and make an open-source contribution!

    Cloud Skills

    Skill up on key cloud technologies with these free, self-guided learning courses - and make the leaderboard!

    🆕 Community Gallery

    Explore the Community Showcase for videos, blog posts and open-source projects from the community!

    - + \ No newline at end of file diff --git a/New-Year/ate/index.html b/New-Year/ate/index.html index d871775097..723061fde5 100644 --- a/New-Year/ate/index.html +++ b/New-Year/ate/index.html @@ -14,13 +14,13 @@ - +
    Skip to main content

    Ask The Expert

    Ask the Expert is a series of scheduled 30-minute LIVE broadcasts where you can connect with experts to get your questions answered! You can also visit the site later to view sessions on demand - and see answers to questions you may have submitted ahead of time.


    How does it work?

    The live broadcast will have a moderated chat session where you can submit questions in real time. We will also provide guidance on where you can submit questions ahead of time, and recap the questions and responses on this site later - along with links to video recaps where available.


    Ask the Experts: Azure Kubernetes Service

    Join the Azure Kubernetes Service Product Group this New Year to learn about cloud-native development using Kubernetes on Azure computing. It is time to accelerate your cloud-native application development leveraging the de-facto container platform, Kubernetes. Discuss with the experts on how to develop, manage, scale and secure managed Kubernetes clusters on Azure with an end-to-end development and management experience using Azure Kubernetes Service and Azure Fleet Manager.


    When are the sessions?

    Visit the Ask The Experts page to Register:

    Date | Description
    Feb 9th, 2023 @ 9am PST | Ask the Experts: Azure Kubernetes Service
    Feb 10th, 2023 @ 12:00pm SGT | Ask the Experts: Azure Kubernetes Service (APAC)
    - + \ No newline at end of file diff --git a/New-Year/calendar/index.html b/New-Year/calendar/index.html index c7111a1b55..61363a9f2a 100644 --- a/New-Year/calendar/index.html +++ b/New-Year/calendar/index.html @@ -14,13 +14,13 @@ - +
    Skip to main content

    #CNNY Calendar

    #SaveTheDate

    #CloudNativeNewYear runs Jan 23 - Feb 23. Check this page for key activities scheduled all month. Use this icon key to scan quickly for dates related to activities of interest.

    • 🎤 Ask The Expert - live Q&A with product teams
    • ✍🏽 #30DaysOfCloudNative - daily content posts from experts
    • 🎯 Cloud Skills Challenge - self-guided learning (with leaderboards)
    • 🎙 Webinars - learn from experts (registration required)
    • ⚛ Reactor - community meetups in-person & online (registration required)
    When | What | Where
    Jan 23 | 🎯 Cloud Skills Challenge Starts | Register Now
    Jan 23 | ✍🏽 #30DaysOfCloudNative Kickoff | Website
    Jan 24 | #TechEspresso: Container Offerings in Azure | Register Now
    Jan 26 | 🎙 Webinar: Quickstart Guide to Kubernetes | Register Now
    Feb 03 | #AzureHappyHours: How DAPR Bindings simplify 3rd party service integrations | Register Now
    Feb 10 | #SamosaChaiDotNET Microservices with DAPR+.NET | Register Now
    Feb 14 | #TechEspresso: Azure Kubernetes Service for Startups | Register Now
    Feb 17 | #AzureHappyHour: DAPR Config Building Block for Microservices Setup | Register Now
    Feb 23 | 🎯 Cloud Skills Challenge Ends | Last Day!!
    Feb 28 | #TechEspresso: KEDA & DAPR extension introduction in AKS | Register Now
    - + \ No newline at end of file diff --git a/New-Year/index.html b/New-Year/index.html index 64ed8050dc..47ca977cea 100644 --- a/New-Year/index.html +++ b/New-Year/index.html @@ -14,13 +14,13 @@ - +
    Skip to main content


    Join us for a month-long celebration of Cloud-Native Computing - from core concepts and developer tools, to usage scenarios and best practices. Bookmark this page, then head over to the blog every weekday as we kick off multiple community-driven and self-guided learning initiatives to jumpstart your Cloud-Native developer journey.

    #30DaysOfCloudNative

    Join us on a #30Day journey into Cloud-Native fundamentals.

    Ask The Experts

    Join us for online conversations with the product teams - submit questions ahead of time or ask them live!

    Cloud Skills

    Skill up on key cloud technologies with these free, self-guided learning courses - and make the leaderboard!

    - + \ No newline at end of file diff --git a/assets/js/010f538e.b09031ba.js b/assets/js/010f538e.b09031ba.js deleted file mode 100644 index fa6179374f..0000000000 --- a/assets/js/010f538e.b09031ba.js +++ /dev/null @@ -1 +0,0 @@ -"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[997],{19881:e=>{e.exports=JSON.parse('{"blogPosts":[{"id":"hacktogether-recap","metadata":{"permalink":"/Cloud-Native/30daysofIA/hacktogether-recap","source":"@site/blog-30daysofIA/2023-09-08/hack-together-recap.md","title":"HackTogether Recap \ud83c\udf42","description":"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!","date":"2023-09-08T00:00:00.000Z","formattedDate":"September 8, 2023","tags":[{"label":"Fall-For-IA","permalink":"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{"label":"30-days-of-IA","permalink":"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{"label":"learn-live","permalink":"/Cloud-Native/30daysofIA/tags/learn-live"},{"label":"hack-together","permalink":"/Cloud-Native/30daysofIA/tags/hack-together"},{"label":"community-buzz","permalink":"/Cloud-Native/30daysofIA/tags/community-buzz"},{"label":"ask-the-expert","permalink":"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{"label":"azure-kubernetes-service","permalink":"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{"label":"azure-functions","permalink":"/Cloud-Native/30daysofIA/tags/azure-functions"},{"label":"azure-openai","permalink":"/Cloud-Native/30daysofIA/tags/azure-openai"},{"label":"azure-container-apps","permalink":"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{"label":"azure-cosmos-db","permalink":"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{"label":"github-copilot","permalink":"/Cloud-Native/30daysofIA/tags/github-copilot"},{"label":"github-codespaces","permalink":"/Cloud-Native/30daysofIA/tags/github-codespaces"},{"label":"github-actions","permalink":"/Cloud-Native/30daysofIA/tags/github-actions"}],"readingTime":3.675,"hasTruncateMarker":false,"authors":[{"name":"It\'s 30DaysOfIA","title":"FallForIA Content Team","url":"https://github.com/cloud-native","imageURL":"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png","key":"cnteam"}],"frontMatter":{"slug":"hacktogether-recap","title":"HackTogether Recap \ud83c\udf42","authors":["cnteam"],"draft":false,"hide_table_of_contents":false,"toc_min_heading_level":2,"toc_max_heading_level":3,"keywords":["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization","hack-together"],"image":"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png","description":"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!","tags":["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},"nextItem":{"title":"Fall is Coming! 
\ud83c\udf42","permalink":"/Cloud-Native/30daysofIA/road-to-fallforIA"}},"content":"\\n \\n \\n \\n \\n \\n \\n \\n \\n\\n\\n\x3c!-- End METADATA --\x3e\\n\\nContinue The Learning Journey through **Fall For Intelligent Apps!** \ud83c\udf42\\n\\n## What We\'ll Cover\\n * Thank you! \u2665\ufe0f \\n * Recap of The [JavaScript on Azure Global Hack-Together](https://aka.ms/JavaScripton_Azure)\\n * Continue the journey\\n * Hands-on practice: Make your first contribution to open-source!\\n * Resources: For self-study!\\n\\n\\n\x3c!-- ************************************* --\x3e\\n\x3c!-- AUTHORS: ONLY UPDATE BELOW THIS LINE --\x3e\\n\x3c!-- ************************************* --\x3e\\n\\n## Thank you! \u2665\ufe0f \\n![image](https://user-images.githubusercontent.com/40116776/264592120-1dc08b59-0555-40b2-8866-59248a573b83.png)\\n\\nIt\'s hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it\'s time for a wrap!\\n\\nFrom the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It\'s been truly inspiring to see the passion and dedication from this strong community, and we\'re honored to be a part of it. \u2728\\n\\n## Recap of The [JavaScript on Azure Global Hack-Together](https://aka.ms/JavaScripton_Azure)\\n\\nAs we wrap up this exciting event, we wanted to take a moment to reflect on all that we\'ve accomplished together. Over the last 15 days, we\'ve covered a lot of ground, from the basics of contributing to Open source to the exploration of the Contoso Real Estate project from its Frontend to its Backend and future AI implementation. \\n\\nNow that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you\'re looking to make your fist contribution to open source, become an open source maintainers, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. 
So, let\'s dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!\\n\\n### JSonAzure Hack-together Roadmap \ud83d\udccd:\\n![hack-together-roadmap (2)](https://user-images.githubusercontent.com/40116776/264975573-85938fcc-b235-4b5b-b45a-f174d3cf560d.png)\\n\\n\\n### Recap on past Livestreams\ud83c\udf1f:\\n\\nDay 1\ufe0f\u20e3: [Opening Keynote (Hack-together Launch)](https://developer.microsoft.com/reactor/events/20275/?WT.mc_id=academic-98351-juliamuiruri): Introduction to the Contoso Real Estate Open-source project!, managing complex and complex enterprise architecture, new announcements for JavaScript developers on Azure\\n\\nDay 2\ufe0f\u20e3: [GitHub Copilot & Codespaces](https://developer.microsoft.com/reactor/events/20321/?WT.mc_id=academic-98351-juliamuiruri): Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)\\n\\nDay 6\ufe0f\u20e3: [Build your Frontend using Static Web Apps](https://developer.microsoft.com/reactor/events/20276/?WT.mc_id=academic-98351-juliamuiruri) as part of a complex, modern composable frontends (or micro-frontends) and cloud-native applications.\\n\\nDay 9\ufe0f\u20e3: Build a Serverless Backend using [Azure Functions](https://developer.microsoft.com/reactor/events/20277/?WT.mc_id=academic-98351-juliamuiruri)\\n\\nDay 1\ufe0f\u20e33\ufe0f\u20e3: Easily connect to an [Azure Cosmos DB](https://developer.microsoft.com/reactor/events/20278/?WT.mc_id=academic-98351-juliamuiruri), exploring its benefits and how to get started\\n\\nDay 1\ufe0f\u20e35\ufe0f\u20e3: Being in the AI Era, we dive into the [Azure OpenAI Service](https://developer.microsoft.com/reactor/events/20322/?WT.mc_id=academic-98351-juliamuiruri) and how you can start to build intelligent JavaScript applications\\n\\n### \ud83d\udcd6 Self-Learning Resources\\n\\n1. JavaScript on Azure Global Hack Together [Module collection](https://aka.ms/JavaScriptonAzureCSC)\\n2. Lets #HackTogether: Javascript On Azure [Keynote](https://dev.to/azure/lets-hacktogether-javascript-on-azure-keynote-nml)\\n3. [Step by Step Guide: Migrating v3 to v4 programming model for Azure Functions for Node.Js Application](https://techcommunity.microsoft.com/t5/educator-developer-blog/step-by-step-guide-migrating-v3-to-v4-programming-model-for/ba-p/3897691?WT.mc_id=academic-98351-juliamuiruri)\\n\\n## Continue your journey with #FallForIntelligentApps\\nJoin us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series or building an end to end solution architecture with a live guided experience through the Learn Live series. 
Discover more here.\\n\\n## Hands-on practice: Make your first contribution to open source!\\nJoin our GitHUb Discussion Forum to connect with developers from every part of world, see contributions from other, find collaborators and make your first contribution to a real-world project!\\nDon\'t forget to give the repo a star \u2b50\\n\\n## Resources\\nAll resources are accessible on our [landing page](https://aka.ms/JavaScripton_Azure)"},{"id":"road-to-fallforIA","metadata":{"permalink":"/Cloud-Native/30daysofIA/road-to-fallforIA","source":"@site/blog-30daysofIA/2023-08-28/road-to-fallforia.md","title":"Fall is Coming! \ud83c\udf42","description":"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.","date":"2023-08-28T00:00:00.000Z","formattedDate":"August 28, 2023","tags":[{"label":"Fall-For-IA","permalink":"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{"label":"30-days-of-IA","permalink":"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{"label":"learn-live","permalink":"/Cloud-Native/30daysofIA/tags/learn-live"},{"label":"hack-together","permalink":"/Cloud-Native/30daysofIA/tags/hack-together"},{"label":"community-buzz","permalink":"/Cloud-Native/30daysofIA/tags/community-buzz"},{"label":"ask-the-expert","permalink":"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{"label":"azure-kubernetes-service","permalink":"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{"label":"azure-functions","permalink":"/Cloud-Native/30daysofIA/tags/azure-functions"},{"label":"azure-openai","permalink":"/Cloud-Native/30daysofIA/tags/azure-openai"},{"label":"azure-container-apps","permalink":"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{"label":"azure-cosmos-db","permalink":"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{"label":"github-copilot","permalink":"/Cloud-Native/30daysofIA/tags/github-copilot"},{"label":"github-codespaces","permalink":"/Cloud-Native/30daysofIA/tags/github-codespaces"},{"label":"github-actions","permalink":"/Cloud-Native/30daysofIA/tags/github-actions"}],"readingTime":0.785,"hasTruncateMarker":false,"authors":[{"name":"It\'s 30DaysOfIA","title":"FallForIA Content Team","url":"https://github.com/cloud-native","imageURL":"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png","key":"cnteam"}],"frontMatter":{"slug":"road-to-fallforIA","title":"Fall is Coming! \ud83c\udf42","authors":["cnteam"],"draft":false,"hide_table_of_contents":false,"toc_min_heading_level":2,"toc_max_heading_level":3,"keywords":["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization"],"image":"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png","description":"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. 
Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.","tags":["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},"prevItem":{"title":"HackTogether Recap \ud83c\udf42","permalink":"/Cloud-Native/30daysofIA/hacktogether-recap"}},"content":"\\n \\n \\n \\n \\n \\n \\n \\n \\n\\n\\n\x3c!-- End METADATA --\x3e\\n\\nSeptember is almost here - and that can only mean one thing!! It\'s time to **\ud83c\udf42 Fall for something new and exciting** and spend a few weeks skilling up on relevant tools, techologies and solutions!! \\n\\nLast year, we focused on #ServerlessSeptember. This year, we\'re building on that theme with the addition of cloud-scale **Data**, cloud-native **Technologies** and cloud-based **AI** integrations to help you modernize and build intelligent apps for the enterprise!\\n\\nWatch this space - and join us in September to learn more!"}]}')}}]); \ No newline at end of file diff --git a/assets/js/010f538e.ba5bab3a.js b/assets/js/010f538e.ba5bab3a.js new file mode 100644 index 0000000000..bd0f865b89 --- /dev/null +++ b/assets/js/010f538e.ba5bab3a.js @@ -0,0 +1 @@ +"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[997],{19881:e=>{e.exports=JSON.parse('{"blogPosts":[{"id":"hacktogether-recap","metadata":{"permalink":"/Cloud-Native/30daysofIA/hacktogether-recap","source":"@site/blog-30daysofIA/2023-09-08/hack-together-recap.md","title":"HackTogether Recap \ud83c\udf42","description":"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! 
Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!","date":"2023-09-08T00:00:00.000Z","formattedDate":"September 8, 2023","tags":[{"label":"Fall-For-IA","permalink":"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{"label":"30-days-of-IA","permalink":"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{"label":"learn-live","permalink":"/Cloud-Native/30daysofIA/tags/learn-live"},{"label":"hack-together","permalink":"/Cloud-Native/30daysofIA/tags/hack-together"},{"label":"community-buzz","permalink":"/Cloud-Native/30daysofIA/tags/community-buzz"},{"label":"ask-the-expert","permalink":"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{"label":"azure-kubernetes-service","permalink":"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{"label":"azure-functions","permalink":"/Cloud-Native/30daysofIA/tags/azure-functions"},{"label":"azure-openai","permalink":"/Cloud-Native/30daysofIA/tags/azure-openai"},{"label":"azure-container-apps","permalink":"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{"label":"azure-cosmos-db","permalink":"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{"label":"github-copilot","permalink":"/Cloud-Native/30daysofIA/tags/github-copilot"},{"label":"github-codespaces","permalink":"/Cloud-Native/30daysofIA/tags/github-codespaces"},{"label":"github-actions","permalink":"/Cloud-Native/30daysofIA/tags/github-actions"}],"readingTime":3.995,"hasTruncateMarker":false,"authors":[{"name":"It\'s 30DaysOfIA","title":"FallForIA Content Team","url":"https://github.com/cloud-native","imageURL":"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png","key":"cnteam"}],"frontMatter":{"slug":"hacktogether-recap","title":"HackTogether Recap \ud83c\udf42","authors":["cnteam"],"draft":false,"hide_table_of_contents":false,"toc_min_heading_level":2,"toc_max_heading_level":3,"keywords":["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization","hack-together"],"image":"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png","description":"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!","tags":["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},"nextItem":{"title":"Fall is Coming! \ud83c\udf42","permalink":"/Cloud-Native/30daysofIA/road-to-fallforIA"}},"content":"\\n\\n\\n\\n\\n\\n \\n \\n \\n \\n \\n \\n \\n \\n\\n\\n\x3c!-- End METADATA --\x3e\\n\\nContinue The Learning Journey through **Fall For Intelligent Apps!** \ud83c\udf42\\n\\n## What We\'ll Cover\\n * Thank you! \u2665\ufe0f \\n * Recap of The [JavaScript on Azure Global Hack-Together](https://aka.ms/JavaScripton_Azure)\\n * Continue the journey\\n * Hands-on practice: Make your first contribution to open-source!\\n * Resources: For self-study!\\n\\n\\n\x3c!-- ************************************* --\x3e\\n\x3c!-- AUTHORS: ONLY UPDATE BELOW THIS LINE --\x3e\\n\x3c!-- ************************************* --\x3e\\n\\n## Thank you! 
\u2665\ufe0f \\n![image](https://user-images.githubusercontent.com/40116776/264592120-1dc08b59-0555-40b2-8866-59248a573b83.png)\\n\\nIt\'s hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it\'s time for a wrap!\\n\\nFrom the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It\'s been truly inspiring to see the passion and dedication from this strong community, and we\'re honored to be a part of it. \u2728\\n\\n## Recap of The [JavaScript on Azure Global Hack-Together](https://aka.ms/JavaScripton_Azure)\\n\\nAs we wrap up this exciting event, we wanted to take a moment to reflect on all that we\'ve accomplished together. Over the last 15 days, we\'ve covered a lot of ground, from the basics of contributing to Open source to the exploration of the Contoso Real Estate project from its Frontend to its Backend and future AI implementation. \\n\\nNow that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you\'re looking to make your fist contribution to open source, become an open source maintainers, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let\'s dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!\\n\\n### JSonAzure Hack-together Roadmap \ud83d\udccd:\\n![hack-together-roadmap (2)](https://user-images.githubusercontent.com/40116776/264975573-85938fcc-b235-4b5b-b45a-f174d3cf560d.png)\\n\\n\\n### Recap on past Livestreams\ud83c\udf1f:\\n\\nDay 1\ufe0f\u20e3: [Opening Keynote (Hack-together Launch)](https://developer.microsoft.com/reactor/events/20275/?WT.mc_id=academic-98351-juliamuiruri): Introduction to the Contoso Real Estate Open-source project!, managing complex and complex enterprise architecture, new announcements for JavaScript developers on Azure\\n\\nDay 2\ufe0f\u20e3: [GitHub Copilot & Codespaces](https://developer.microsoft.com/reactor/events/20321/?WT.mc_id=academic-98351-juliamuiruri): Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)\\n\\nDay 6\ufe0f\u20e3: [Build your Frontend using Static Web Apps](https://developer.microsoft.com/reactor/events/20276/?WT.mc_id=academic-98351-juliamuiruri) as part of a complex, modern composable frontends (or micro-frontends) and cloud-native applications.\\n\\nDay 9\ufe0f\u20e3: Build a Serverless Backend using [Azure Functions](https://developer.microsoft.com/reactor/events/20277/?WT.mc_id=academic-98351-juliamuiruri)\\n\\nDay 1\ufe0f\u20e33\ufe0f\u20e3: Easily connect to an [Azure Cosmos DB](https://developer.microsoft.com/reactor/events/20278/?WT.mc_id=academic-98351-juliamuiruri), exploring its benefits and how to get started\\n\\nDay 1\ufe0f\u20e35\ufe0f\u20e3: Being in the AI Era, we dive into the [Azure OpenAI Service](https://developer.microsoft.com/reactor/events/20322/?WT.mc_id=academic-98351-juliamuiruri) and how you can start to build intelligent JavaScript applications\\n\\n### \ud83d\udcd6 Self-Learning Resources\\n\\n1. JavaScript on Azure Global Hack Together [Module collection](https://aka.ms/JavaScriptonAzureCSC)\\n2. 
Lets #HackTogether: Javascript On Azure [Keynote](https://dev.to/azure/lets-hacktogether-javascript-on-azure-keynote-nml)\\n3. [Step by Step Guide: Migrating v3 to v4 programming model for Azure Functions for Node.Js Application](https://techcommunity.microsoft.com/t5/educator-developer-blog/step-by-step-guide-migrating-v3-to-v4-programming-model-for/ba-p/3897691?WT.mc_id=academic-98351-juliamuiruri)\\n\\n## Continue your journey with #FallForIntelligentApps\\nJoin us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the #30Days of Intelligent Apps series, completing the Intelligent Apps Skills Challenge, joining the Product Group for a live Ask The Expert series or building an end to end solution architecture with a live guided experience through the Learn Live series. Discover more here.\\n\\n## Hands-on practice: Make your first contribution to open source!\\nJoin our GitHUb Discussion Forum to connect with developers from every part of world, see contributions from other, find collaborators and make your first contribution to a real-world project!\\nDon\'t forget to give the repo a star \u2b50\\n\\n## Resources\\nAll resources are accessible on our [landing page](https://aka.ms/JavaScripton_Azure)"},{"id":"road-to-fallforIA","metadata":{"permalink":"/Cloud-Native/30daysofIA/road-to-fallforIA","source":"@site/blog-30daysofIA/2023-08-28/road-to-fallforia.md","title":"Fall is Coming! \ud83c\udf42","description":"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. 
Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.","date":"2023-08-28T00:00:00.000Z","formattedDate":"August 28, 2023","tags":[{"label":"Fall-For-IA","permalink":"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{"label":"30-days-of-IA","permalink":"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{"label":"learn-live","permalink":"/Cloud-Native/30daysofIA/tags/learn-live"},{"label":"hack-together","permalink":"/Cloud-Native/30daysofIA/tags/hack-together"},{"label":"community-buzz","permalink":"/Cloud-Native/30daysofIA/tags/community-buzz"},{"label":"ask-the-expert","permalink":"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{"label":"azure-kubernetes-service","permalink":"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{"label":"azure-functions","permalink":"/Cloud-Native/30daysofIA/tags/azure-functions"},{"label":"azure-openai","permalink":"/Cloud-Native/30daysofIA/tags/azure-openai"},{"label":"azure-container-apps","permalink":"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{"label":"azure-cosmos-db","permalink":"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{"label":"github-copilot","permalink":"/Cloud-Native/30daysofIA/tags/github-copilot"},{"label":"github-codespaces","permalink":"/Cloud-Native/30daysofIA/tags/github-codespaces"},{"label":"github-actions","permalink":"/Cloud-Native/30daysofIA/tags/github-actions"}],"readingTime":1.055,"hasTruncateMarker":false,"authors":[{"name":"It\'s 30DaysOfIA","title":"FallForIA Content Team","url":"https://github.com/cloud-native","imageURL":"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png","key":"cnteam"}],"frontMatter":{"slug":"road-to-fallforIA","title":"Fall is Coming! \ud83c\udf42","authors":["cnteam"],"draft":false,"hide_table_of_contents":false,"toc_min_heading_level":2,"toc_max_heading_level":3,"keywords":["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization"],"image":"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png","description":"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.","tags":["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},"prevItem":{"title":"HackTogether Recap \ud83c\udf42","permalink":"/Cloud-Native/30daysofIA/hacktogether-recap"}},"content":"\\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n\\n\\n\x3c!-- End METADATA --\x3e\\n\\nSeptember is almost here - and that can only mean one thing!! It\'s time to **\ud83c\udf42 Fall for something new and exciting** and spend a few weeks skilling up on relevant tools, techologies and solutions!! \\n\\nLast year, we focused on #ServerlessSeptember. 
This year, we\'re building on that theme with the addition of cloud-scale **Data**, cloud-native **Technologies** and cloud-based **AI** integrations to help you modernize and build intelligent apps for the enterprise!\\n\\nWatch this space - and join us in September to learn more!"}]}')}}]); \ No newline at end of file diff --git a/assets/js/0f2db0e2.0cdcd502.js b/assets/js/0f2db0e2.0cdcd502.js deleted file mode 100644 index f7ee7e085b..0000000000 --- a/assets/js/0f2db0e2.0cdcd502.js +++ /dev/null @@ -1 +0,0 @@ -"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[76508],{3905:(e,t,a)=>{a.d(t,{Zo:()=>u,kt:()=>d});var o=a(67294);function r(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function n(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);t&&(o=o.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,o)}return a}function i(e){for(var t=1;t=0||(r[a]=e[a]);return r}(e,t);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);for(o=0;o=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(r[a]=e[a])}return r}var s=o.createContext({}),c=function(e){var t=o.useContext(s),a=t;return e&&(a="function"==typeof e?e(t):i(i({},t),e)),a},u=function(e){var t=c(e.components);return o.createElement(s.Provider,{value:t},e.children)},p={inlineCode:"code",wrapper:function(e){var t=e.children;return o.createElement(o.Fragment,{},t)}},h=o.forwardRef((function(e,t){var a=e.components,r=e.mdxType,n=e.originalType,s=e.parentName,u=l(e,["components","mdxType","originalType","parentName"]),h=c(a),d=r,m=h["".concat(s,".").concat(d)]||h[d]||p[d]||n;return a?o.createElement(m,i(i({ref:t},u),{},{components:a})):o.createElement(m,i({ref:t},u))}));function d(e,t){var a=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var n=a.length,i=new Array(n);i[0]=h;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l.mdxType="string"==typeof e?e:r,i[1]=l;for(var c=2;c{a.r(t),a.d(t,{assets:()=>s,contentTitle:()=>i,default:()=>p,frontMatter:()=>n,metadata:()=>l,toc:()=>c});var o=a(87462),r=(a(67294),a(3905));const n={slug:"hacktogether-recap",title:"HackTogether Recap \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization","hack-together"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},i=void 0,l={permalink:"/Cloud-Native/30daysofIA/hacktogether-recap",source:"@site/blog-30daysofIA/2023-09-08/hack-together-recap.md",title:"HackTogether Recap \ud83c\udf42",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! 
Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",date:"2023-09-08T00:00:00.000Z",formattedDate:"September 8, 2023",tags:[{label:"Fall-For-IA",permalink:"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{label:"30-days-of-IA",permalink:"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{label:"learn-live",permalink:"/Cloud-Native/30daysofIA/tags/learn-live"},{label:"hack-together",permalink:"/Cloud-Native/30daysofIA/tags/hack-together"},{label:"community-buzz",permalink:"/Cloud-Native/30daysofIA/tags/community-buzz"},{label:"ask-the-expert",permalink:"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{label:"azure-kubernetes-service",permalink:"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{label:"azure-functions",permalink:"/Cloud-Native/30daysofIA/tags/azure-functions"},{label:"azure-openai",permalink:"/Cloud-Native/30daysofIA/tags/azure-openai"},{label:"azure-container-apps",permalink:"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{label:"azure-cosmos-db",permalink:"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{label:"github-copilot",permalink:"/Cloud-Native/30daysofIA/tags/github-copilot"},{label:"github-codespaces",permalink:"/Cloud-Native/30daysofIA/tags/github-codespaces"},{label:"github-actions",permalink:"/Cloud-Native/30daysofIA/tags/github-actions"}],readingTime:3.675,hasTruncateMarker:!1,authors:[{name:"It's 30DaysOfIA",title:"FallForIA Content Team",url:"https://github.com/cloud-native",imageURL:"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png",key:"cnteam"}],frontMatter:{slug:"hacktogether-recap",title:"HackTogether Recap \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization","hack-together"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},nextItem:{title:"Fall is Coming! \ud83c\udf42",permalink:"/Cloud-Native/30daysofIA/road-to-fallforIA"}},s={authorsImageUrls:[void 0]},c=[{value:"What We'll Cover",id:"what-well-cover",level:2},{value:"Thank you! 
\u2665\ufe0f",id:"thank-you-\ufe0f",level:2},{value:"Recap of The JavaScript on Azure Global Hack-Together",id:"recap-of-the-javascript-on-azure-global-hack-together",level:2},{value:"JSonAzure Hack-together Roadmap \ud83d\udccd:",id:"jsonazure-hack-together-roadmap-",level:3},{value:"Recap on past Livestreams\ud83c\udf1f:",id:"recap-on-past-livestreams",level:3},{value:"\ud83d\udcd6 Self-Learning Resources",id:"-self-learning-resources",level:3},{value:"Continue your journey with #FallForIntelligentApps",id:"continue-your-journey-with-fallforintelligentapps",level:2},{value:"Hands-on practice: Make your first contribution to open source!",id:"hands-on-practice-make-your-first-contribution-to-open-source",level:2},{value:"Resources",id:"resources",level:2}],u={toc:c};function p(e){let{components:t,...a}=e;return(0,r.kt)("wrapper",(0,o.Z)({},u,a,{components:t,mdxType:"MDXLayout"}),(0,r.kt)("head",null,(0,r.kt)("meta",{name:"twitter:url",content:"https://azure.github.io/Cloud-Native/30daysofIA/hacktogether-recap"}),(0,r.kt)("meta",{name:"twitter:title",content:"Continue The Learning Journey through **Fall For Intelligent Apps! \ud83c\udf42"}),(0,r.kt)("meta",{name:"twitter:description",content:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!"}),(0,r.kt)("meta",{name:"twitter:image",content:"https://azure.github.io/Cloud-Native/img/ogImage.png"}),(0,r.kt)("meta",{name:"twitter:card",content:"summary_large_image"}),(0,r.kt)("meta",{name:"twitter:creator",content:"@nitya"}),(0,r.kt)("meta",{name:"twitter:site",content:"@AzureAdvocates"}),(0,r.kt)("link",{rel:"canonical",href:"https://azure.github.io/Cloud-Native/30daysofIA/hacktogether-recap"})),(0,r.kt)("p",null,"Continue The Learning Journey through ",(0,r.kt)("strong",{parentName:"p"},"Fall For Intelligent Apps!")," \ud83c\udf42"),(0,r.kt)("h2",{id:"what-well-cover"},"What We'll Cover"),(0,r.kt)("ul",null,(0,r.kt)("li",{parentName:"ul"},"Thank you! \u2665\ufe0f "),(0,r.kt)("li",{parentName:"ul"},"Recap of The ",(0,r.kt)("a",{parentName:"li",href:"https://aka.ms/JavaScripton_Azure"},"JavaScript on Azure Global Hack-Together")),(0,r.kt)("li",{parentName:"ul"},"Continue the journey"),(0,r.kt)("li",{parentName:"ul"},"Hands-on practice: Make your first contribution to open-source!"),(0,r.kt)("li",{parentName:"ul"},"Resources: For self-study!")),(0,r.kt)("h2",{id:"thank-you-\ufe0f"},"Thank you! \u2665\ufe0f"),(0,r.kt)("p",null,(0,r.kt)("img",{parentName:"p",src:"https://user-images.githubusercontent.com/40116776/264592120-1dc08b59-0555-40b2-8866-59248a573b83.png",alt:"image"})),(0,r.kt)("p",null,"It's hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!"),(0,r.kt)("p",null,"From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. 
\u2728"),(0,r.kt)("h2",{id:"recap-of-the-javascript-on-azure-global-hack-together"},"Recap of The ",(0,r.kt)("a",{parentName:"h2",href:"https://aka.ms/JavaScripton_Azure"},"JavaScript on Azure Global Hack-Together")),(0,r.kt)("p",null,"As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to Open source to the exploration of the Contoso Real Estate project from its Frontend to its Backend and future AI implementation. "),(0,r.kt)("p",null,"Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you're looking to make your fist contribution to open source, become an open source maintainers, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!"),(0,r.kt)("h3",{id:"jsonazure-hack-together-roadmap-"},"JSonAzure Hack-together Roadmap \ud83d\udccd:"),(0,r.kt)("p",null,(0,r.kt)("img",{parentName:"p",src:"https://user-images.githubusercontent.com/40116776/264975573-85938fcc-b235-4b5b-b45a-f174d3cf560d.png",alt:"hack-together-roadmap (2)"})),(0,r.kt)("h3",{id:"recap-on-past-livestreams"},"Recap on past Livestreams\ud83c\udf1f:"),(0,r.kt)("p",null,"Day 1\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20275/?WT.mc_id=academic-98351-juliamuiruri"},"Opening Keynote (Hack-together Launch)"),": Introduction to the Contoso Real Estate Open-source project!, managing complex and complex enterprise architecture, new announcements for JavaScript developers on Azure"),(0,r.kt)("p",null,"Day 2\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20321/?WT.mc_id=academic-98351-juliamuiruri"},"GitHub Copilot & Codespaces"),": Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)"),(0,r.kt)("p",null,"Day 6\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20276/?WT.mc_id=academic-98351-juliamuiruri"},"Build your Frontend using Static Web Apps")," as part of a complex, modern composable frontends (or micro-frontends) and cloud-native applications."),(0,r.kt)("p",null,"Day 9\ufe0f\u20e3: Build a Serverless Backend using ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20277/?WT.mc_id=academic-98351-juliamuiruri"},"Azure Functions")),(0,r.kt)("p",null,"Day 1\ufe0f\u20e33\ufe0f\u20e3: Easily connect to an ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20278/?WT.mc_id=academic-98351-juliamuiruri"},"Azure Cosmos DB"),", exploring its benefits and how to get started"),(0,r.kt)("p",null,"Day 1\ufe0f\u20e35\ufe0f\u20e3: Being in the AI Era, we dive into the ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20322/?WT.mc_id=academic-98351-juliamuiruri"},"Azure OpenAI Service")," and how you can start to build intelligent JavaScript applications"),(0,r.kt)("h3",{id:"-self-learning-resources"},"\ud83d\udcd6 Self-Learning Resources"),(0,r.kt)("ol",null,(0,r.kt)("li",{parentName:"ol"},"JavaScript on Azure Global Hack Together 
",(0,r.kt)("a",{parentName:"li",href:"https://aka.ms/JavaScriptonAzureCSC"},"Module collection")),(0,r.kt)("li",{parentName:"ol"},"Lets #HackTogether: Javascript On Azure ",(0,r.kt)("a",{parentName:"li",href:"https://dev.to/azure/lets-hacktogether-javascript-on-azure-keynote-nml"},"Keynote")),(0,r.kt)("li",{parentName:"ol"},(0,r.kt)("a",{parentName:"li",href:"https://techcommunity.microsoft.com/t5/educator-developer-blog/step-by-step-guide-migrating-v3-to-v4-programming-model-for/ba-p/3897691?WT.mc_id=academic-98351-juliamuiruri"},"Step by Step Guide: Migrating v3 to v4 programming model for Azure Functions for Node.Js Application"))),(0,r.kt)("h2",{id:"continue-your-journey-with-fallforintelligentapps"},"Continue your journey with #FallForIntelligentApps"),(0,r.kt)("p",null,"Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA/30days",target:"_blank"},"#30Days of Intelligent Apps")," series, completing the ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA/csc",target:"_blank"},"Intelligent Apps Skills Challenge"),", joining the Product Group for a live ",(0,r.kt)("a",{href:"http://aka.ms/FallforIA/ATE-series",target:"_blank"},"Ask The Expert")," series or building an end to end solution architecture with a live guided experience through the ",(0,r.kt)("a",{href:"http://aka.ms/FallforIA/LearnLive",target:"_blank"},"Learn Live")," series. Discover more ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA",target:"_blank"},"here"),"."),(0,r.kt)("h2",{id:"hands-on-practice-make-your-first-contribution-to-open-source"},"Hands-on practice: Make your first contribution to open source!"),(0,r.kt)("p",null,"Join our GitHUb Discussion Forum to connect with developers from every part of world, see contributions from other, find collaborators and make your first contribution to a real-world project!\nDon't forget to give the repo a star \u2b50"),(0,r.kt)("h2",{id:"resources"},"Resources"),(0,r.kt)("p",null,"All resources are accessible on our ",(0,r.kt)("a",{parentName:"p",href:"https://aka.ms/JavaScripton_Azure"},"landing page")))}p.isMDXComponent=!0}}]); \ No newline at end of file diff --git a/assets/js/0f2db0e2.7ef32f7a.js b/assets/js/0f2db0e2.7ef32f7a.js new file mode 100644 index 0000000000..dfa18f442e --- /dev/null +++ b/assets/js/0f2db0e2.7ef32f7a.js @@ -0,0 +1 @@ +"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[76508],{3905:(e,t,a)=>{a.d(t,{Zo:()=>p,kt:()=>d});var o=a(67294);function r(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function n(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);t&&(o=o.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,o)}return a}function i(e){for(var t=1;t=0||(r[a]=e[a]);return r}(e,t);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);for(o=0;o=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(r[a]=e[a])}return r}var c=o.createContext({}),s=function(e){var t=o.useContext(c),a=t;return e&&(a="function"==typeof e?e(t):i(i({},t),e)),a},p=function(e){var t=s(e.components);return 
o.createElement(c.Provider,{value:t},e.children)},u={inlineCode:"code",wrapper:function(e){var t=e.children;return o.createElement(o.Fragment,{},t)}},h=o.forwardRef((function(e,t){var a=e.components,r=e.mdxType,n=e.originalType,c=e.parentName,p=l(e,["components","mdxType","originalType","parentName"]),h=s(a),d=r,m=h["".concat(c,".").concat(d)]||h[d]||u[d]||n;return a?o.createElement(m,i(i({ref:t},p),{},{components:a})):o.createElement(m,i({ref:t},p))}));function d(e,t){var a=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var n=a.length,i=new Array(n);i[0]=h;var l={};for(var c in t)hasOwnProperty.call(t,c)&&(l[c]=t[c]);l.originalType=e,l.mdxType="string"==typeof e?e:r,i[1]=l;for(var s=2;s{a.r(t),a.d(t,{assets:()=>c,contentTitle:()=>i,default:()=>u,frontMatter:()=>n,metadata:()=>l,toc:()=>s});var o=a(87462),r=(a(67294),a(3905));const n={slug:"hacktogether-recap",title:"HackTogether Recap \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization","hack-together"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},i=void 0,l={permalink:"/Cloud-Native/30daysofIA/hacktogether-recap",source:"@site/blog-30daysofIA/2023-09-08/hack-together-recap.md",title:"HackTogether Recap \ud83c\udf42",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! 
Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",date:"2023-09-08T00:00:00.000Z",formattedDate:"September 8, 2023",tags:[{label:"Fall-For-IA",permalink:"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{label:"30-days-of-IA",permalink:"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{label:"learn-live",permalink:"/Cloud-Native/30daysofIA/tags/learn-live"},{label:"hack-together",permalink:"/Cloud-Native/30daysofIA/tags/hack-together"},{label:"community-buzz",permalink:"/Cloud-Native/30daysofIA/tags/community-buzz"},{label:"ask-the-expert",permalink:"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{label:"azure-kubernetes-service",permalink:"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{label:"azure-functions",permalink:"/Cloud-Native/30daysofIA/tags/azure-functions"},{label:"azure-openai",permalink:"/Cloud-Native/30daysofIA/tags/azure-openai"},{label:"azure-container-apps",permalink:"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{label:"azure-cosmos-db",permalink:"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{label:"github-copilot",permalink:"/Cloud-Native/30daysofIA/tags/github-copilot"},{label:"github-codespaces",permalink:"/Cloud-Native/30daysofIA/tags/github-codespaces"},{label:"github-actions",permalink:"/Cloud-Native/30daysofIA/tags/github-actions"}],readingTime:3.995,hasTruncateMarker:!1,authors:[{name:"It's 30DaysOfIA",title:"FallForIA Content Team",url:"https://github.com/cloud-native",imageURL:"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png",key:"cnteam"}],frontMatter:{slug:"hacktogether-recap",title:"HackTogether Recap \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization","hack-together"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},nextItem:{title:"Fall is Coming! \ud83c\udf42",permalink:"/Cloud-Native/30daysofIA/road-to-fallforIA"}},c={authorsImageUrls:[void 0]},s=[{value:"What We'll Cover",id:"what-well-cover",level:2},{value:"Thank you! 
\u2665\ufe0f",id:"thank-you-\ufe0f",level:2},{value:"Recap of The JavaScript on Azure Global Hack-Together",id:"recap-of-the-javascript-on-azure-global-hack-together",level:2},{value:"JSonAzure Hack-together Roadmap \ud83d\udccd:",id:"jsonazure-hack-together-roadmap-",level:3},{value:"Recap on past Livestreams\ud83c\udf1f:",id:"recap-on-past-livestreams",level:3},{value:"\ud83d\udcd6 Self-Learning Resources",id:"-self-learning-resources",level:3},{value:"Continue your journey with #FallForIntelligentApps",id:"continue-your-journey-with-fallforintelligentapps",level:2},{value:"Hands-on practice: Make your first contribution to open source!",id:"hands-on-practice-make-your-first-contribution-to-open-source",level:2},{value:"Resources",id:"resources",level:2}],p={toc:s};function u(e){let{components:t,...a}=e;return(0,r.kt)("wrapper",(0,o.Z)({},p,a,{components:t,mdxType:"MDXLayout"}),(0,r.kt)("head",null,(0,r.kt)("meta",{property:"og:url",content:"https://azure.github.io/cloud-native/30daysofia/hacktogether-recap"}),(0,r.kt)("meta",{property:"og:type",content:"website"}),(0,r.kt)("meta",{property:"og:title",content:"HackTogether Recap \ud83c\udf42 | Build Intelligent Apps On Azure"}),(0,r.kt)("meta",{property:"og:description",content:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!"}),(0,r.kt)("meta",{property:"og:image",content:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png"}),(0,r.kt)("meta",{name:"twitter:url",content:"https://azure.github.io/Cloud-Native/30daysofIA/hacktogether-recap"}),(0,r.kt)("meta",{name:"twitter:title",content:"Continue The Learning Journey through **Fall For Intelligent Apps! \ud83c\udf42"}),(0,r.kt)("meta",{name:"twitter:description",content:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!"}),(0,r.kt)("meta",{name:"twitter:image",content:"https://azure.github.io/Cloud-Native/img/ogImage.png"}),(0,r.kt)("meta",{name:"twitter:card",content:"summary_large_image"}),(0,r.kt)("meta",{name:"twitter:creator",content:"@nitya"}),(0,r.kt)("meta",{name:"twitter:site",content:"@AzureAdvocates"}),(0,r.kt)("link",{rel:"canonical",href:"https://azure.github.io/Cloud-Native/30daysofIA/hacktogether-recap"})),(0,r.kt)("p",null,"Continue The Learning Journey through ",(0,r.kt)("strong",{parentName:"p"},"Fall For Intelligent Apps!")," \ud83c\udf42"),(0,r.kt)("h2",{id:"what-well-cover"},"What We'll Cover"),(0,r.kt)("ul",null,(0,r.kt)("li",{parentName:"ul"},"Thank you! \u2665\ufe0f "),(0,r.kt)("li",{parentName:"ul"},"Recap of The ",(0,r.kt)("a",{parentName:"li",href:"https://aka.ms/JavaScripton_Azure"},"JavaScript on Azure Global Hack-Together")),(0,r.kt)("li",{parentName:"ul"},"Continue the journey"),(0,r.kt)("li",{parentName:"ul"},"Hands-on practice: Make your first contribution to open-source!"),(0,r.kt)("li",{parentName:"ul"},"Resources: For self-study!")),(0,r.kt)("h2",{id:"thank-you-\ufe0f"},"Thank you! 
\u2665\ufe0f"),(0,r.kt)("p",null,(0,r.kt)("img",{parentName:"p",src:"https://user-images.githubusercontent.com/40116776/264592120-1dc08b59-0555-40b2-8866-59248a573b83.png",alt:"image"})),(0,r.kt)("p",null,"It's hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!"),(0,r.kt)("p",null,"From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. \u2728"),(0,r.kt)("h2",{id:"recap-of-the-javascript-on-azure-global-hack-together"},"Recap of The ",(0,r.kt)("a",{parentName:"h2",href:"https://aka.ms/JavaScripton_Azure"},"JavaScript on Azure Global Hack-Together")),(0,r.kt)("p",null,"As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to Open source to the exploration of the Contoso Real Estate project from its Frontend to its Backend and future AI implementation. "),(0,r.kt)("p",null,"Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you're looking to make your fist contribution to open source, become an open source maintainers, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!"),(0,r.kt)("h3",{id:"jsonazure-hack-together-roadmap-"},"JSonAzure Hack-together Roadmap \ud83d\udccd:"),(0,r.kt)("p",null,(0,r.kt)("img",{parentName:"p",src:"https://user-images.githubusercontent.com/40116776/264975573-85938fcc-b235-4b5b-b45a-f174d3cf560d.png",alt:"hack-together-roadmap (2)"})),(0,r.kt)("h3",{id:"recap-on-past-livestreams"},"Recap on past Livestreams\ud83c\udf1f:"),(0,r.kt)("p",null,"Day 1\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20275/?WT.mc_id=academic-98351-juliamuiruri"},"Opening Keynote (Hack-together Launch)"),": Introduction to the Contoso Real Estate Open-source project!, managing complex and complex enterprise architecture, new announcements for JavaScript developers on Azure"),(0,r.kt)("p",null,"Day 2\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20321/?WT.mc_id=academic-98351-juliamuiruri"},"GitHub Copilot & Codespaces"),": Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)"),(0,r.kt)("p",null,"Day 6\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20276/?WT.mc_id=academic-98351-juliamuiruri"},"Build your Frontend using Static Web Apps")," as part of a complex, modern composable frontends (or micro-frontends) and cloud-native applications."),(0,r.kt)("p",null,"Day 9\ufe0f\u20e3: Build a Serverless Backend using ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20277/?WT.mc_id=academic-98351-juliamuiruri"},"Azure Functions")),(0,r.kt)("p",null,"Day 1\ufe0f\u20e33\ufe0f\u20e3: Easily connect to an 
",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20278/?WT.mc_id=academic-98351-juliamuiruri"},"Azure Cosmos DB"),", exploring its benefits and how to get started"),(0,r.kt)("p",null,"Day 1\ufe0f\u20e35\ufe0f\u20e3: Being in the AI Era, we dive into the ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20322/?WT.mc_id=academic-98351-juliamuiruri"},"Azure OpenAI Service")," and how you can start to build intelligent JavaScript applications"),(0,r.kt)("h3",{id:"-self-learning-resources"},"\ud83d\udcd6 Self-Learning Resources"),(0,r.kt)("ol",null,(0,r.kt)("li",{parentName:"ol"},"JavaScript on Azure Global Hack Together ",(0,r.kt)("a",{parentName:"li",href:"https://aka.ms/JavaScriptonAzureCSC"},"Module collection")),(0,r.kt)("li",{parentName:"ol"},"Lets #HackTogether: Javascript On Azure ",(0,r.kt)("a",{parentName:"li",href:"https://dev.to/azure/lets-hacktogether-javascript-on-azure-keynote-nml"},"Keynote")),(0,r.kt)("li",{parentName:"ol"},(0,r.kt)("a",{parentName:"li",href:"https://techcommunity.microsoft.com/t5/educator-developer-blog/step-by-step-guide-migrating-v3-to-v4-programming-model-for/ba-p/3897691?WT.mc_id=academic-98351-juliamuiruri"},"Step by Step Guide: Migrating v3 to v4 programming model for Azure Functions for Node.Js Application"))),(0,r.kt)("h2",{id:"continue-your-journey-with-fallforintelligentapps"},"Continue your journey with #FallForIntelligentApps"),(0,r.kt)("p",null,"Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA/30days",target:"_blank"},"#30Days of Intelligent Apps")," series, completing the ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA/csc",target:"_blank"},"Intelligent Apps Skills Challenge"),", joining the Product Group for a live ",(0,r.kt)("a",{href:"http://aka.ms/FallforIA/ATE-series",target:"_blank"},"Ask The Expert")," series or building an end to end solution architecture with a live guided experience through the ",(0,r.kt)("a",{href:"http://aka.ms/FallforIA/LearnLive",target:"_blank"},"Learn Live")," series. 
Discover more ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA",target:"_blank"},"here"),"."),(0,r.kt)("h2",{id:"hands-on-practice-make-your-first-contribution-to-open-source"},"Hands-on practice: Make your first contribution to open source!"),(0,r.kt)("p",null,"Join our GitHUb Discussion Forum to connect with developers from every part of world, see contributions from other, find collaborators and make your first contribution to a real-world project!\nDon't forget to give the repo a star \u2b50"),(0,r.kt)("h2",{id:"resources"},"Resources"),(0,r.kt)("p",null,"All resources are accessible on our ",(0,r.kt)("a",{parentName:"p",href:"https://aka.ms/JavaScripton_Azure"},"landing page")))}u.isMDXComponent=!0}}]); \ No newline at end of file diff --git a/assets/js/532dad37.df49261f.js b/assets/js/532dad37.406d9826.js similarity index 50% rename from assets/js/532dad37.df49261f.js rename to assets/js/532dad37.406d9826.js index 89d2850721..923172ef0f 100644 --- a/assets/js/532dad37.df49261f.js +++ b/assets/js/532dad37.406d9826.js @@ -1 +1 @@ -"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[97655],{3905:(e,t,a)=>{a.d(t,{Zo:()=>c,kt:()=>m});var n=a(67294);function i(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function o(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function r(e){for(var t=1;t=0||(i[a]=e[a]);return i}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(i[a]=e[a])}return i}var s=n.createContext({}),p=function(e){var t=n.useContext(s),a=t;return e&&(a="function"==typeof e?e(t):r(r({},t),e)),a},c=function(e){var t=p(e.components);return n.createElement(s.Provider,{value:t},e.children)},d={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},u=n.forwardRef((function(e,t){var a=e.components,i=e.mdxType,o=e.originalType,s=e.parentName,c=l(e,["components","mdxType","originalType","parentName"]),u=p(a),m=i,g=u["".concat(s,".").concat(m)]||u[m]||d[m]||o;return a?n.createElement(g,r(r({ref:t},c),{},{components:a})):n.createElement(g,r({ref:t},c))}));function m(e,t){var a=arguments,i=t&&t.mdxType;if("string"==typeof e||i){var o=a.length,r=new Array(o);r[0]=u;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l.mdxType="string"==typeof e?e:i,r[1]=l;for(var p=2;p{a.r(t),a.d(t,{assets:()=>s,contentTitle:()=>r,default:()=>d,frontMatter:()=>o,metadata:()=>l,toc:()=>p});var n=a(87462),i=(a(67294),a(3905));const o={slug:"road-to-fallforIA",title:"Fall is Coming! \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. 
Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},r=void 0,l={permalink:"/Cloud-Native/30daysofIA/road-to-fallforIA",source:"@site/blog-30daysofIA/2023-08-28/road-to-fallforia.md",title:"Fall is Coming! \ud83c\udf42",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",date:"2023-08-28T00:00:00.000Z",formattedDate:"August 28, 2023",tags:[{label:"Fall-For-IA",permalink:"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{label:"30-days-of-IA",permalink:"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{label:"learn-live",permalink:"/Cloud-Native/30daysofIA/tags/learn-live"},{label:"hack-together",permalink:"/Cloud-Native/30daysofIA/tags/hack-together"},{label:"community-buzz",permalink:"/Cloud-Native/30daysofIA/tags/community-buzz"},{label:"ask-the-expert",permalink:"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{label:"azure-kubernetes-service",permalink:"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{label:"azure-functions",permalink:"/Cloud-Native/30daysofIA/tags/azure-functions"},{label:"azure-openai",permalink:"/Cloud-Native/30daysofIA/tags/azure-openai"},{label:"azure-container-apps",permalink:"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{label:"azure-cosmos-db",permalink:"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{label:"github-copilot",permalink:"/Cloud-Native/30daysofIA/tags/github-copilot"},{label:"github-codespaces",permalink:"/Cloud-Native/30daysofIA/tags/github-codespaces"},{label:"github-actions",permalink:"/Cloud-Native/30daysofIA/tags/github-actions"}],readingTime:.785,hasTruncateMarker:!1,authors:[{name:"It's 30DaysOfIA",title:"FallForIA Content Team",url:"https://github.com/cloud-native",imageURL:"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png",key:"cnteam"}],frontMatter:{slug:"road-to-fallforIA",title:"Fall is Coming! \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. 
Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},prevItem:{title:"HackTogether Recap \ud83c\udf42",permalink:"/Cloud-Native/30daysofIA/hacktogether-recap"}},s={authorsImageUrls:[void 0]},p=[],c={toc:p};function d(e){let{components:t,...a}=e;return(0,i.kt)("wrapper",(0,n.Z)({},c,a,{components:t,mdxType:"MDXLayout"}),(0,i.kt)("head",null,(0,i.kt)("meta",{name:"twitter:url",content:"https://azure.github.io/Cloud-Native/30daysofIA/road-to-fallforIA"}),(0,i.kt)("meta",{name:"twitter:title",content:"It's Time to Fall For Intelligent Apps"}),(0,i.kt)("meta",{name:"twitter:description",content:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure."}),(0,i.kt)("meta",{name:"twitter:image",content:"https://azure.github.io/Cloud-Native/img/ogImage.png"}),(0,i.kt)("meta",{name:"twitter:card",content:"summary_large_image"}),(0,i.kt)("meta",{name:"twitter:creator",content:"@nitya"}),(0,i.kt)("meta",{name:"twitter:site",content:"@AzureAdvocates"}),(0,i.kt)("link",{rel:"canonical",href:"https://azure.github.io/Cloud-Native/30daysofIA/road-to-fallforIA"})),(0,i.kt)("p",null,"September is almost here - and that can only mean one thing!! It's time to ",(0,i.kt)("strong",{parentName:"p"},"\ud83c\udf42 Fall for something new and exciting")," and spend a few weeks skilling up on relevant tools, techologies and solutions!! "),(0,i.kt)("p",null,"Last year, we focused on #ServerlessSeptember. 
This year, we're building on that theme with the addition of cloud-scale ",(0,i.kt)("strong",{parentName:"p"},"Data"),", cloud-native ",(0,i.kt)("strong",{parentName:"p"},"Technologies")," and cloud-based ",(0,i.kt)("strong",{parentName:"p"},"AI")," integrations to help you modernize and build intelligent apps for the enterprise!"),(0,i.kt)("p",null,"Watch this space - and join us in September to learn more!"))}d.isMDXComponent=!0}}]); \ No newline at end of file +"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[97655],{3905:(e,t,a)=>{a.d(t,{Zo:()=>d,kt:()=>m});var n=a(67294);function i(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function o(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function r(e){for(var t=1;t=0||(i[a]=e[a]);return i}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(i[a]=e[a])}return i}var s=n.createContext({}),p=function(e){var t=n.useContext(s),a=t;return e&&(a="function"==typeof e?e(t):r(r({},t),e)),a},d=function(e){var t=p(e.components);return n.createElement(s.Provider,{value:t},e.children)},c={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},u=n.forwardRef((function(e,t){var a=e.components,i=e.mdxType,o=e.originalType,s=e.parentName,d=l(e,["components","mdxType","originalType","parentName"]),u=p(a),m=i,g=u["".concat(s,".").concat(m)]||u[m]||c[m]||o;return a?n.createElement(g,r(r({ref:t},d),{},{components:a})):n.createElement(g,r({ref:t},d))}));function m(e,t){var a=arguments,i=t&&t.mdxType;if("string"==typeof e||i){var o=a.length,r=new Array(o);r[0]=u;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l.mdxType="string"==typeof e?e:i,r[1]=l;for(var p=2;p{a.r(t),a.d(t,{assets:()=>s,contentTitle:()=>r,default:()=>c,frontMatter:()=>o,metadata:()=>l,toc:()=>p});var n=a(87462),i=(a(67294),a(3905));const o={slug:"road-to-fallforIA",title:"Fall is Coming! \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},r=void 0,l={permalink:"/Cloud-Native/30daysofIA/road-to-fallforIA",source:"@site/blog-30daysofIA/2023-08-28/road-to-fallforia.md",title:"Fall is Coming! \ud83c\udf42",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. 
Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",date:"2023-08-28T00:00:00.000Z",formattedDate:"August 28, 2023",tags:[{label:"Fall-For-IA",permalink:"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{label:"30-days-of-IA",permalink:"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{label:"learn-live",permalink:"/Cloud-Native/30daysofIA/tags/learn-live"},{label:"hack-together",permalink:"/Cloud-Native/30daysofIA/tags/hack-together"},{label:"community-buzz",permalink:"/Cloud-Native/30daysofIA/tags/community-buzz"},{label:"ask-the-expert",permalink:"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{label:"azure-kubernetes-service",permalink:"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{label:"azure-functions",permalink:"/Cloud-Native/30daysofIA/tags/azure-functions"},{label:"azure-openai",permalink:"/Cloud-Native/30daysofIA/tags/azure-openai"},{label:"azure-container-apps",permalink:"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{label:"azure-cosmos-db",permalink:"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{label:"github-copilot",permalink:"/Cloud-Native/30daysofIA/tags/github-copilot"},{label:"github-codespaces",permalink:"/Cloud-Native/30daysofIA/tags/github-codespaces"},{label:"github-actions",permalink:"/Cloud-Native/30daysofIA/tags/github-actions"}],readingTime:1.055,hasTruncateMarker:!1,authors:[{name:"It's 30DaysOfIA",title:"FallForIA Content Team",url:"https://github.com/cloud-native",imageURL:"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png",key:"cnteam"}],frontMatter:{slug:"road-to-fallforIA",title:"Fall is Coming! \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},prevItem:{title:"HackTogether Recap \ud83c\udf42",permalink:"/Cloud-Native/30daysofIA/hacktogether-recap"}},s={authorsImageUrls:[void 0]},p=[],d={toc:p};function c(e){let{components:t,...a}=e;return(0,i.kt)("wrapper",(0,n.Z)({},d,a,{components:t,mdxType:"MDXLayout"}),(0,i.kt)("head",null,(0,i.kt)("meta",{property:"og:url",content:"https://azure.github.io/cloud-native/30daysofia/road-to-fallforia"}),(0,i.kt)("meta",{property:"og:type",content:"website"}),(0,i.kt)("meta",{property:"og:title",content:"Fall is Coming! \ud83c\udf42 | Build Intelligent Apps On Azure"}),(0,i.kt)("meta",{property:"og:description",content:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. 
Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure."}),(0,i.kt)("meta",{property:"og:image",content:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png"}),(0,i.kt)("meta",{name:"twitter:url",content:"https://azure.github.io/Cloud-Native/30daysofIA/road-to-fallforIA"}),(0,i.kt)("meta",{name:"twitter:title",content:"It's Time to Fall For Intelligent Apps"}),(0,i.kt)("meta",{name:"twitter:description",content:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure."}),(0,i.kt)("meta",{name:"twitter:image",content:"https://azure.github.io/Cloud-Native/img/ogImage.png"}),(0,i.kt)("meta",{name:"twitter:card",content:"summary_large_image"}),(0,i.kt)("meta",{name:"twitter:creator",content:"@nitya"}),(0,i.kt)("meta",{name:"twitter:site",content:"@AzureAdvocates"}),(0,i.kt)("link",{rel:"canonical",href:"https://azure.github.io/Cloud-Native/30daysofIA/road-to-fallforIA"})),(0,i.kt)("p",null,"September is almost here - and that can only mean one thing!! It's time to ",(0,i.kt)("strong",{parentName:"p"},"\ud83c\udf42 Fall for something new and exciting")," and spend a few weeks skilling up on relevant tools, techologies and solutions!! "),(0,i.kt)("p",null,"Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale ",(0,i.kt)("strong",{parentName:"p"},"Data"),", cloud-native ",(0,i.kt)("strong",{parentName:"p"},"Technologies")," and cloud-based ",(0,i.kt)("strong",{parentName:"p"},"AI")," integrations to help you modernize and build intelligent apps for the enterprise!"),(0,i.kt)("p",null,"Watch this space - and join us in September to learn more!"))}c.isMDXComponent=!0}}]); \ No newline at end of file diff --git a/assets/js/5979b063.36716d44.js b/assets/js/5979b063.36716d44.js deleted file mode 100644 index dd82013420..0000000000 --- a/assets/js/5979b063.36716d44.js +++ /dev/null @@ -1 +0,0 @@ -"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[37961],{3905:(e,t,a)=>{a.d(t,{Zo:()=>u,kt:()=>d});var o=a(67294);function r(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function n(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);t&&(o=o.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,o)}return a}function i(e){for(var t=1;t=0||(r[a]=e[a]);return r}(e,t);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);for(o=0;o=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(r[a]=e[a])}return r}var s=o.createContext({}),c=function(e){var t=o.useContext(s),a=t;return e&&(a="function"==typeof e?e(t):i(i({},t),e)),a},u=function(e){var t=c(e.components);return o.createElement(s.Provider,{value:t},e.children)},p={inlineCode:"code",wrapper:function(e){var t=e.children;return o.createElement(o.Fragment,{},t)}},h=o.forwardRef((function(e,t){var a=e.components,r=e.mdxType,n=e.originalType,s=e.parentName,u=l(e,["components","mdxType","originalType","parentName"]),h=c(a),d=r,m=h["".concat(s,".").concat(d)]||h[d]||p[d]||n;return a?o.createElement(m,i(i({ref:t},u),{},{components:a})):o.createElement(m,i({ref:t},u))}));function d(e,t){var 
a=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var n=a.length,i=new Array(n);i[0]=h;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l.mdxType="string"==typeof e?e:r,i[1]=l;for(var c=2;c{a.r(t),a.d(t,{assets:()=>s,contentTitle:()=>i,default:()=>p,frontMatter:()=>n,metadata:()=>l,toc:()=>c});var o=a(87462),r=(a(67294),a(3905));const n={slug:"hacktogether-recap",title:"HackTogether Recap \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization","hack-together"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},i=void 0,l={permalink:"/Cloud-Native/30daysofIA/hacktogether-recap",source:"@site/blog-30daysofIA/2023-09-08/hack-together-recap.md",title:"HackTogether Recap \ud83c\udf42",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",date:"2023-09-08T00:00:00.000Z",formattedDate:"September 8, 2023",tags:[{label:"Fall-For-IA",permalink:"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{label:"30-days-of-IA",permalink:"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{label:"learn-live",permalink:"/Cloud-Native/30daysofIA/tags/learn-live"},{label:"hack-together",permalink:"/Cloud-Native/30daysofIA/tags/hack-together"},{label:"community-buzz",permalink:"/Cloud-Native/30daysofIA/tags/community-buzz"},{label:"ask-the-expert",permalink:"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{label:"azure-kubernetes-service",permalink:"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{label:"azure-functions",permalink:"/Cloud-Native/30daysofIA/tags/azure-functions"},{label:"azure-openai",permalink:"/Cloud-Native/30daysofIA/tags/azure-openai"},{label:"azure-container-apps",permalink:"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{label:"azure-cosmos-db",permalink:"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{label:"github-copilot",permalink:"/Cloud-Native/30daysofIA/tags/github-copilot"},{label:"github-codespaces",permalink:"/Cloud-Native/30daysofIA/tags/github-codespaces"},{label:"github-actions",permalink:"/Cloud-Native/30daysofIA/tags/github-actions"}],readingTime:3.675,hasTruncateMarker:!1,authors:[{name:"It's 30DaysOfIA",title:"FallForIA Content Team",url:"https://github.com/cloud-native",imageURL:"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png",key:"cnteam"}],frontMatter:{slug:"hacktogether-recap",title:"HackTogether Recap \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital 
experiences","app modernization","hack-together"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},nextItem:{title:"Fall is Coming! \ud83c\udf42",permalink:"/Cloud-Native/30daysofIA/road-to-fallforIA"}},s={authorsImageUrls:[void 0]},c=[{value:"What We'll Cover",id:"what-well-cover",level:2},{value:"Thank you! \u2665\ufe0f",id:"thank-you-\ufe0f",level:2},{value:"Recap of The JavaScript on Azure Global Hack-Together",id:"recap-of-the-javascript-on-azure-global-hack-together",level:2},{value:"JSonAzure Hack-together Roadmap \ud83d\udccd:",id:"jsonazure-hack-together-roadmap-",level:3},{value:"Recap on past Livestreams\ud83c\udf1f:",id:"recap-on-past-livestreams",level:3},{value:"\ud83d\udcd6 Self-Learning Resources",id:"-self-learning-resources",level:3},{value:"Continue your journey with #FallForIntelligentApps",id:"continue-your-journey-with-fallforintelligentapps",level:2},{value:"Hands-on practice: Make your first contribution to open source!",id:"hands-on-practice-make-your-first-contribution-to-open-source",level:2},{value:"Resources",id:"resources",level:2}],u={toc:c};function p(e){let{components:t,...a}=e;return(0,r.kt)("wrapper",(0,o.Z)({},u,a,{components:t,mdxType:"MDXLayout"}),(0,r.kt)("head",null,(0,r.kt)("meta",{name:"twitter:url",content:"https://azure.github.io/Cloud-Native/30daysofIA/hacktogether-recap"}),(0,r.kt)("meta",{name:"twitter:title",content:"Continue The Learning Journey through **Fall For Intelligent Apps! \ud83c\udf42"}),(0,r.kt)("meta",{name:"twitter:description",content:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!"}),(0,r.kt)("meta",{name:"twitter:image",content:"https://azure.github.io/Cloud-Native/img/ogImage.png"}),(0,r.kt)("meta",{name:"twitter:card",content:"summary_large_image"}),(0,r.kt)("meta",{name:"twitter:creator",content:"@nitya"}),(0,r.kt)("meta",{name:"twitter:site",content:"@AzureAdvocates"}),(0,r.kt)("link",{rel:"canonical",href:"https://azure.github.io/Cloud-Native/30daysofIA/hacktogether-recap"})),(0,r.kt)("p",null,"Continue The Learning Journey through ",(0,r.kt)("strong",{parentName:"p"},"Fall For Intelligent Apps!")," \ud83c\udf42"),(0,r.kt)("h2",{id:"what-well-cover"},"What We'll Cover"),(0,r.kt)("ul",null,(0,r.kt)("li",{parentName:"ul"},"Thank you! \u2665\ufe0f "),(0,r.kt)("li",{parentName:"ul"},"Recap of The ",(0,r.kt)("a",{parentName:"li",href:"https://aka.ms/JavaScripton_Azure"},"JavaScript on Azure Global Hack-Together")),(0,r.kt)("li",{parentName:"ul"},"Continue the journey"),(0,r.kt)("li",{parentName:"ul"},"Hands-on practice: Make your first contribution to open-source!"),(0,r.kt)("li",{parentName:"ul"},"Resources: For self-study!")),(0,r.kt)("h2",{id:"thank-you-\ufe0f"},"Thank you! 
\u2665\ufe0f"),(0,r.kt)("p",null,(0,r.kt)("img",{parentName:"p",src:"https://user-images.githubusercontent.com/40116776/264592120-1dc08b59-0555-40b2-8866-59248a573b83.png",alt:"image"})),(0,r.kt)("p",null,"It's hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!"),(0,r.kt)("p",null,"From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. \u2728"),(0,r.kt)("h2",{id:"recap-of-the-javascript-on-azure-global-hack-together"},"Recap of The ",(0,r.kt)("a",{parentName:"h2",href:"https://aka.ms/JavaScripton_Azure"},"JavaScript on Azure Global Hack-Together")),(0,r.kt)("p",null,"As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to Open source to the exploration of the Contoso Real Estate project from its Frontend to its Backend and future AI implementation. "),(0,r.kt)("p",null,"Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you're looking to make your fist contribution to open source, become an open source maintainers, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!"),(0,r.kt)("h3",{id:"jsonazure-hack-together-roadmap-"},"JSonAzure Hack-together Roadmap \ud83d\udccd:"),(0,r.kt)("p",null,(0,r.kt)("img",{parentName:"p",src:"https://user-images.githubusercontent.com/40116776/264975573-85938fcc-b235-4b5b-b45a-f174d3cf560d.png",alt:"hack-together-roadmap (2)"})),(0,r.kt)("h3",{id:"recap-on-past-livestreams"},"Recap on past Livestreams\ud83c\udf1f:"),(0,r.kt)("p",null,"Day 1\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20275/?WT.mc_id=academic-98351-juliamuiruri"},"Opening Keynote (Hack-together Launch)"),": Introduction to the Contoso Real Estate Open-source project!, managing complex and complex enterprise architecture, new announcements for JavaScript developers on Azure"),(0,r.kt)("p",null,"Day 2\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20321/?WT.mc_id=academic-98351-juliamuiruri"},"GitHub Copilot & Codespaces"),": Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)"),(0,r.kt)("p",null,"Day 6\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20276/?WT.mc_id=academic-98351-juliamuiruri"},"Build your Frontend using Static Web Apps")," as part of a complex, modern composable frontends (or micro-frontends) and cloud-native applications."),(0,r.kt)("p",null,"Day 9\ufe0f\u20e3: Build a Serverless Backend using ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20277/?WT.mc_id=academic-98351-juliamuiruri"},"Azure Functions")),(0,r.kt)("p",null,"Day 1\ufe0f\u20e33\ufe0f\u20e3: Easily connect to an 
",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20278/?WT.mc_id=academic-98351-juliamuiruri"},"Azure Cosmos DB"),", exploring its benefits and how to get started"),(0,r.kt)("p",null,"Day 1\ufe0f\u20e35\ufe0f\u20e3: Being in the AI Era, we dive into the ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20322/?WT.mc_id=academic-98351-juliamuiruri"},"Azure OpenAI Service")," and how you can start to build intelligent JavaScript applications"),(0,r.kt)("h3",{id:"-self-learning-resources"},"\ud83d\udcd6 Self-Learning Resources"),(0,r.kt)("ol",null,(0,r.kt)("li",{parentName:"ol"},"JavaScript on Azure Global Hack Together ",(0,r.kt)("a",{parentName:"li",href:"https://aka.ms/JavaScriptonAzureCSC"},"Module collection")),(0,r.kt)("li",{parentName:"ol"},"Lets #HackTogether: Javascript On Azure ",(0,r.kt)("a",{parentName:"li",href:"https://dev.to/azure/lets-hacktogether-javascript-on-azure-keynote-nml"},"Keynote")),(0,r.kt)("li",{parentName:"ol"},(0,r.kt)("a",{parentName:"li",href:"https://techcommunity.microsoft.com/t5/educator-developer-blog/step-by-step-guide-migrating-v3-to-v4-programming-model-for/ba-p/3897691?WT.mc_id=academic-98351-juliamuiruri"},"Step by Step Guide: Migrating v3 to v4 programming model for Azure Functions for Node.Js Application"))),(0,r.kt)("h2",{id:"continue-your-journey-with-fallforintelligentapps"},"Continue your journey with #FallForIntelligentApps"),(0,r.kt)("p",null,"Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA/30days",target:"_blank"},"#30Days of Intelligent Apps")," series, completing the ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA/csc",target:"_blank"},"Intelligent Apps Skills Challenge"),", joining the Product Group for a live ",(0,r.kt)("a",{href:"http://aka.ms/FallforIA/ATE-series",target:"_blank"},"Ask The Expert")," series or building an end to end solution architecture with a live guided experience through the ",(0,r.kt)("a",{href:"http://aka.ms/FallforIA/LearnLive",target:"_blank"},"Learn Live")," series. 
Discover more ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA",target:"_blank"},"here"),"."),(0,r.kt)("h2",{id:"hands-on-practice-make-your-first-contribution-to-open-source"},"Hands-on practice: Make your first contribution to open source!"),(0,r.kt)("p",null,"Join our GitHUb Discussion Forum to connect with developers from every part of world, see contributions from other, find collaborators and make your first contribution to a real-world project!\nDon't forget to give the repo a star \u2b50"),(0,r.kt)("h2",{id:"resources"},"Resources"),(0,r.kt)("p",null,"All resources are accessible on our ",(0,r.kt)("a",{parentName:"p",href:"https://aka.ms/JavaScripton_Azure"},"landing page")))}p.isMDXComponent=!0}}]); \ No newline at end of file diff --git a/assets/js/5979b063.b519f53a.js b/assets/js/5979b063.b519f53a.js new file mode 100644 index 0000000000..4ced371435 --- /dev/null +++ b/assets/js/5979b063.b519f53a.js @@ -0,0 +1 @@ +"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[37961],{3905:(e,t,a)=>{a.d(t,{Zo:()=>p,kt:()=>d});var o=a(67294);function r(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function n(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);t&&(o=o.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,o)}return a}function i(e){for(var t=1;t=0||(r[a]=e[a]);return r}(e,t);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);for(o=0;o=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(r[a]=e[a])}return r}var c=o.createContext({}),s=function(e){var t=o.useContext(c),a=t;return e&&(a="function"==typeof e?e(t):i(i({},t),e)),a},p=function(e){var t=s(e.components);return o.createElement(c.Provider,{value:t},e.children)},u={inlineCode:"code",wrapper:function(e){var t=e.children;return o.createElement(o.Fragment,{},t)}},h=o.forwardRef((function(e,t){var a=e.components,r=e.mdxType,n=e.originalType,c=e.parentName,p=l(e,["components","mdxType","originalType","parentName"]),h=s(a),d=r,m=h["".concat(c,".").concat(d)]||h[d]||u[d]||n;return a?o.createElement(m,i(i({ref:t},p),{},{components:a})):o.createElement(m,i({ref:t},p))}));function d(e,t){var a=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var n=a.length,i=new Array(n);i[0]=h;var l={};for(var c in t)hasOwnProperty.call(t,c)&&(l[c]=t[c]);l.originalType=e,l.mdxType="string"==typeof e?e:r,i[1]=l;for(var s=2;s{a.r(t),a.d(t,{assets:()=>c,contentTitle:()=>i,default:()=>u,frontMatter:()=>n,metadata:()=>l,toc:()=>s});var o=a(87462),r=(a(67294),a(3905));const n={slug:"hacktogether-recap",title:"HackTogether Recap \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization","hack-together"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! 
Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},i=void 0,l={permalink:"/Cloud-Native/30daysofIA/hacktogether-recap",source:"@site/blog-30daysofIA/2023-09-08/hack-together-recap.md",title:"HackTogether Recap \ud83c\udf42",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",date:"2023-09-08T00:00:00.000Z",formattedDate:"September 8, 2023",tags:[{label:"Fall-For-IA",permalink:"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{label:"30-days-of-IA",permalink:"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{label:"learn-live",permalink:"/Cloud-Native/30daysofIA/tags/learn-live"},{label:"hack-together",permalink:"/Cloud-Native/30daysofIA/tags/hack-together"},{label:"community-buzz",permalink:"/Cloud-Native/30daysofIA/tags/community-buzz"},{label:"ask-the-expert",permalink:"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{label:"azure-kubernetes-service",permalink:"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{label:"azure-functions",permalink:"/Cloud-Native/30daysofIA/tags/azure-functions"},{label:"azure-openai",permalink:"/Cloud-Native/30daysofIA/tags/azure-openai"},{label:"azure-container-apps",permalink:"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{label:"azure-cosmos-db",permalink:"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{label:"github-copilot",permalink:"/Cloud-Native/30daysofIA/tags/github-copilot"},{label:"github-codespaces",permalink:"/Cloud-Native/30daysofIA/tags/github-codespaces"},{label:"github-actions",permalink:"/Cloud-Native/30daysofIA/tags/github-actions"}],readingTime:3.995,hasTruncateMarker:!1,authors:[{name:"It's 30DaysOfIA",title:"FallForIA Content Team",url:"https://github.com/cloud-native",imageURL:"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png",key:"cnteam"}],frontMatter:{slug:"hacktogether-recap",title:"HackTogether Recap \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization","hack-together"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},nextItem:{title:"Fall is Coming! \ud83c\udf42",permalink:"/Cloud-Native/30daysofIA/road-to-fallforIA"}},c={authorsImageUrls:[void 0]},s=[{value:"What We'll Cover",id:"what-well-cover",level:2},{value:"Thank you! 
\u2665\ufe0f",id:"thank-you-\ufe0f",level:2},{value:"Recap of The JavaScript on Azure Global Hack-Together",id:"recap-of-the-javascript-on-azure-global-hack-together",level:2},{value:"JSonAzure Hack-together Roadmap \ud83d\udccd:",id:"jsonazure-hack-together-roadmap-",level:3},{value:"Recap on past Livestreams\ud83c\udf1f:",id:"recap-on-past-livestreams",level:3},{value:"\ud83d\udcd6 Self-Learning Resources",id:"-self-learning-resources",level:3},{value:"Continue your journey with #FallForIntelligentApps",id:"continue-your-journey-with-fallforintelligentapps",level:2},{value:"Hands-on practice: Make your first contribution to open source!",id:"hands-on-practice-make-your-first-contribution-to-open-source",level:2},{value:"Resources",id:"resources",level:2}],p={toc:s};function u(e){let{components:t,...a}=e;return(0,r.kt)("wrapper",(0,o.Z)({},p,a,{components:t,mdxType:"MDXLayout"}),(0,r.kt)("head",null,(0,r.kt)("meta",{property:"og:url",content:"https://azure.github.io/cloud-native/30daysofia/hacktogether-recap"}),(0,r.kt)("meta",{property:"og:type",content:"website"}),(0,r.kt)("meta",{property:"og:title",content:"HackTogether Recap \ud83c\udf42 | Build Intelligent Apps On Azure"}),(0,r.kt)("meta",{property:"og:description",content:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!"}),(0,r.kt)("meta",{property:"og:image",content:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png"}),(0,r.kt)("meta",{name:"twitter:url",content:"https://azure.github.io/Cloud-Native/30daysofIA/hacktogether-recap"}),(0,r.kt)("meta",{name:"twitter:title",content:"Continue The Learning Journey through **Fall For Intelligent Apps! \ud83c\udf42"}),(0,r.kt)("meta",{name:"twitter:description",content:"Exciting news! As we approach the close of #JavaScript on #Azure Global Hack today, we are thrilled to announce another exciting opportunity for all JavaScript developers!! Find a recap of Hack together and read all about the upcoming #FallIntoIA on this post!"}),(0,r.kt)("meta",{name:"twitter:image",content:"https://azure.github.io/Cloud-Native/img/ogImage.png"}),(0,r.kt)("meta",{name:"twitter:card",content:"summary_large_image"}),(0,r.kt)("meta",{name:"twitter:creator",content:"@nitya"}),(0,r.kt)("meta",{name:"twitter:site",content:"@AzureAdvocates"}),(0,r.kt)("link",{rel:"canonical",href:"https://azure.github.io/Cloud-Native/30daysofIA/hacktogether-recap"})),(0,r.kt)("p",null,"Continue The Learning Journey through ",(0,r.kt)("strong",{parentName:"p"},"Fall For Intelligent Apps!")," \ud83c\udf42"),(0,r.kt)("h2",{id:"what-well-cover"},"What We'll Cover"),(0,r.kt)("ul",null,(0,r.kt)("li",{parentName:"ul"},"Thank you! \u2665\ufe0f "),(0,r.kt)("li",{parentName:"ul"},"Recap of The ",(0,r.kt)("a",{parentName:"li",href:"https://aka.ms/JavaScripton_Azure"},"JavaScript on Azure Global Hack-Together")),(0,r.kt)("li",{parentName:"ul"},"Continue the journey"),(0,r.kt)("li",{parentName:"ul"},"Hands-on practice: Make your first contribution to open-source!"),(0,r.kt)("li",{parentName:"ul"},"Resources: For self-study!")),(0,r.kt)("h2",{id:"thank-you-\ufe0f"},"Thank you! 
\u2665\ufe0f"),(0,r.kt)("p",null,(0,r.kt)("img",{parentName:"p",src:"https://user-images.githubusercontent.com/40116776/264592120-1dc08b59-0555-40b2-8866-59248a573b83.png",alt:"image"})),(0,r.kt)("p",null,"It's hard to believe that JavaScript on Azure hack-together is ending! It seems like just yesterday that we launched this initiative, and yet here we are, 15 days later, with an incredible amount of learning and growth behind us. So... it's time for a wrap!"),(0,r.kt)("p",null,"From the bottom of our hearts, we want to thank each and every one of you for your participation, engagement, and enthusiasm. It's been truly inspiring to see the passion and dedication from this strong community, and we're honored to be a part of it. \u2728"),(0,r.kt)("h2",{id:"recap-of-the-javascript-on-azure-global-hack-together"},"Recap of The ",(0,r.kt)("a",{parentName:"h2",href:"https://aka.ms/JavaScripton_Azure"},"JavaScript on Azure Global Hack-Together")),(0,r.kt)("p",null,"As we wrap up this exciting event, we wanted to take a moment to reflect on all that we've accomplished together. Over the last 15 days, we've covered a lot of ground, from the basics of contributing to Open source to the exploration of the Contoso Real Estate project from its Frontend to its Backend and future AI implementation. "),(0,r.kt)("p",null,"Now that the hack-together is ending, we want to make sure that you have all the resources you need to continue honing your skills in the future. Whether you're looking to make your fist contribution to open source, become an open source maintainers, collaborate with others, or simply keep learning, there are plenty of resources out there to help you achieve your goals. So, let's dive in and explore all the ways you can continue to grow your JavaScript skills on Azure!"),(0,r.kt)("h3",{id:"jsonazure-hack-together-roadmap-"},"JSonAzure Hack-together Roadmap \ud83d\udccd:"),(0,r.kt)("p",null,(0,r.kt)("img",{parentName:"p",src:"https://user-images.githubusercontent.com/40116776/264975573-85938fcc-b235-4b5b-b45a-f174d3cf560d.png",alt:"hack-together-roadmap (2)"})),(0,r.kt)("h3",{id:"recap-on-past-livestreams"},"Recap on past Livestreams\ud83c\udf1f:"),(0,r.kt)("p",null,"Day 1\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20275/?WT.mc_id=academic-98351-juliamuiruri"},"Opening Keynote (Hack-together Launch)"),": Introduction to the Contoso Real Estate Open-source project!, managing complex and complex enterprise architecture, new announcements for JavaScript developers on Azure"),(0,r.kt)("p",null,"Day 2\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20321/?WT.mc_id=academic-98351-juliamuiruri"},"GitHub Copilot & Codespaces"),": Introduction to your AI pair programmer (GitHub Copilot) and your virtual developer environment on the cloud (GitHub Codespaces)"),(0,r.kt)("p",null,"Day 6\ufe0f\u20e3: ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20276/?WT.mc_id=academic-98351-juliamuiruri"},"Build your Frontend using Static Web Apps")," as part of a complex, modern composable frontends (or micro-frontends) and cloud-native applications."),(0,r.kt)("p",null,"Day 9\ufe0f\u20e3: Build a Serverless Backend using ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20277/?WT.mc_id=academic-98351-juliamuiruri"},"Azure Functions")),(0,r.kt)("p",null,"Day 1\ufe0f\u20e33\ufe0f\u20e3: Easily connect to an 
",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20278/?WT.mc_id=academic-98351-juliamuiruri"},"Azure Cosmos DB"),", exploring its benefits and how to get started"),(0,r.kt)("p",null,"Day 1\ufe0f\u20e35\ufe0f\u20e3: Being in the AI Era, we dive into the ",(0,r.kt)("a",{parentName:"p",href:"https://developer.microsoft.com/reactor/events/20322/?WT.mc_id=academic-98351-juliamuiruri"},"Azure OpenAI Service")," and how you can start to build intelligent JavaScript applications"),(0,r.kt)("h3",{id:"-self-learning-resources"},"\ud83d\udcd6 Self-Learning Resources"),(0,r.kt)("ol",null,(0,r.kt)("li",{parentName:"ol"},"JavaScript on Azure Global Hack Together ",(0,r.kt)("a",{parentName:"li",href:"https://aka.ms/JavaScriptonAzureCSC"},"Module collection")),(0,r.kt)("li",{parentName:"ol"},"Lets #HackTogether: Javascript On Azure ",(0,r.kt)("a",{parentName:"li",href:"https://dev.to/azure/lets-hacktogether-javascript-on-azure-keynote-nml"},"Keynote")),(0,r.kt)("li",{parentName:"ol"},(0,r.kt)("a",{parentName:"li",href:"https://techcommunity.microsoft.com/t5/educator-developer-blog/step-by-step-guide-migrating-v3-to-v4-programming-model-for/ba-p/3897691?WT.mc_id=academic-98351-juliamuiruri"},"Step by Step Guide: Migrating v3 to v4 programming model for Azure Functions for Node.Js Application"))),(0,r.kt)("h2",{id:"continue-your-journey-with-fallforintelligentapps"},"Continue your journey with #FallForIntelligentApps"),(0,r.kt)("p",null,"Join us this Fall on a learning journey to explore building intelligent apps. Combine the power of AI, cloud-native app development, and cloud-scale data to build highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure. Engage in a self-paced learning adventure by following along the ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA/30days",target:"_blank"},"#30Days of Intelligent Apps")," series, completing the ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA/csc",target:"_blank"},"Intelligent Apps Skills Challenge"),", joining the Product Group for a live ",(0,r.kt)("a",{href:"http://aka.ms/FallforIA/ATE-series",target:"_blank"},"Ask The Expert")," series or building an end to end solution architecture with a live guided experience through the ",(0,r.kt)("a",{href:"http://aka.ms/FallforIA/LearnLive",target:"_blank"},"Learn Live")," series. 
Discover more ",(0,r.kt)("a",{href:"https://aka.ms/FallForIA",target:"_blank"},"here"),"."),(0,r.kt)("h2",{id:"hands-on-practice-make-your-first-contribution-to-open-source"},"Hands-on practice: Make your first contribution to open source!"),(0,r.kt)("p",null,"Join our GitHUb Discussion Forum to connect with developers from every part of world, see contributions from other, find collaborators and make your first contribution to a real-world project!\nDon't forget to give the repo a star \u2b50"),(0,r.kt)("h2",{id:"resources"},"Resources"),(0,r.kt)("p",null,"All resources are accessible on our ",(0,r.kt)("a",{parentName:"p",href:"https://aka.ms/JavaScripton_Azure"},"landing page")))}u.isMDXComponent=!0}}]); \ No newline at end of file diff --git a/assets/js/d41d8467.04e2e265.js b/assets/js/d41d8467.fd7f8ab9.js similarity index 78% rename from assets/js/d41d8467.04e2e265.js rename to assets/js/d41d8467.fd7f8ab9.js index 7b406e0eeb..dc771fa9cb 100644 --- a/assets/js/d41d8467.04e2e265.js +++ b/assets/js/d41d8467.fd7f8ab9.js @@ -1 +1 @@ -"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[38057],{2569:(e,t,o)=>{o.r(t),o.d(t,{default:()=>J,prepareUserState:()=>K});var r=o(67294),s=o(91764),a=o(86010),n=o(87462),i=o(76775);function c(e,t){const o=[...e];return o.sort(((e,o)=>t(e)>t(o)?1:t(o)>t(e)?-1:0)),o}const l="checkboxLabel_pwqD",u="tags";function d(e){return new URLSearchParams(e).getAll(u)}function p(e,t){let{id:o,icon:s,label:a,tag:c,...p}=e;const h=(0,i.TH)(),m=(0,i.k6)(),[v,b]=(0,r.useState)(!1);(0,r.useEffect)((()=>{const e=d(h.search);b(e.includes(c))}),[c,h]);const f=(0,r.useCallback)((()=>{const e=function(e,t){const o=e.indexOf(t);if(-1===o)return e.concat(t);const r=[...e];return r.splice(o,1),r}(d(h.search),c),t=function(e,t){const o=new URLSearchParams(e);return o.delete(u),t.forEach((e=>o.append(u,e))),o.toString()}(h.search,e);m.push({...h,search:t,state:K()})}),[c,h,m]);return r.createElement(r.Fragment,null,r.createElement("input",(0,n.Z)({type:"checkbox",id:o,className:"screen-reader-only",onKeyDown:e=>{"Enter"===e.key&&f()},onFocus:e=>{var t;e.relatedTarget&&(null==(t=e.target.nextElementSibling)||t.dispatchEvent(new KeyboardEvent("focus")))},onBlur:e=>{var t;null==(t=e.target.nextElementSibling)||t.dispatchEvent(new KeyboardEvent("blur"))},onChange:f,checked:v},p)),r.createElement("label",{ref:t,htmlFor:o,className:l},a,s))}const h=r.forwardRef(p),m={checkboxLabel:"checkboxLabel_FmrE"},v="operator";function b(e){return new URLSearchParams(e).get(v)??"OR"}function f(){const e="showcase_filter_toggle",t=(0,i.TH)(),o=(0,i.k6)(),[s,n]=(0,r.useState)(!1);(0,r.useEffect)((()=>{n("AND"===b(t.search))}),[t]);const c=(0,r.useCallback)((()=>{n((e=>!e));const e=new URLSearchParams(t.search);e.delete(v),s||e.append(v,s?"OR":"AND"),o.push({...t,search:e.toString(),state:K()})}),[s,t,o]);return r.createElement("div",null,r.createElement("input",{type:"checkbox",id:e,className:"screen-reader-only","aria-label":"Toggle between or and and for the tags you selected",onChange:c,onKeyDown:e=>{"Enter"===e.key&&c()},checked:s}),r.createElement("label",{htmlFor:e,className:(0,a.Z)(m.checkboxLabel,"shadow--md")},r.createElement("span",{className:m.checkboxLabelOr},"OR"),r.createElement("span",{className:m.checkboxLabelAnd},"AND")))}var g=o(83699);const 
w={showcaseCardImage:"showcaseCardImage_qZMA",showcaseCardHeader:"showcaseCardHeader_tfIV",showcaseCardTitle:"showcaseCardTitle_PRHG",svgIconFavorite:"svgIconFavorite_RKtI",showcaseCardSrcBtn:"showcaseCardSrcBtn_AI8i",showcaseCardBody:"showcaseCardBody_I0O5",cardFooter:"cardFooter_EuCG",tag:"tag_Aixk",textLabel:"textLabel_SLNc",colorLabel:"colorLabel_q5Sy"};var A=o(73935),y=o(95237);const E="tooltip_hKx1",z="tooltipArrow_yATY";function k(e){let{children:t,id:o,anchorEl:s,text:a,delay:i}=e;const[c,l]=(0,r.useState)(!1),[u,d]=(0,r.useState)(null),[p,h]=(0,r.useState)(null),[m,v]=(0,r.useState)(null),[b,f]=(0,r.useState)(null),{styles:g,attributes:w}=(0,y.D)(u,p,{modifiers:[{name:"arrow",options:{element:m}},{name:"offset",options:{offset:[0,8]}}]}),k=(0,r.useRef)(null),C=`${o}_tooltip`;return(0,r.useEffect)((()=>{f(s?"string"==typeof s?document.querySelector(s):s:document.body)}),[b,s]),(0,r.useEffect)((()=>{const e=["mouseenter","focus"],t=["mouseleave","blur"],o=()=>{""!==a&&(null==u||u.removeAttribute("title"),k.current=window.setTimeout((()=>{l(!0)}),i||400))},r=()=>{clearInterval(k.current),l(!1)};return u&&(e.forEach((e=>{u.addEventListener(e,o)})),t.forEach((e=>{u.addEventListener(e,r)}))),()=>{u&&(e.forEach((e=>{u.removeEventListener(e,o)})),t.forEach((e=>{u.removeEventListener(e,r)})))}}),[u,a,i]),r.createElement(r.Fragment,null,r.cloneElement(t,{ref:d,"aria-describedby":c?C:void 0}),b?A.createPortal(c&&r.createElement("div",(0,n.Z)({id:C,role:"tooltip",ref:h,className:E,style:g.popper},w.popper),a,r.createElement("span",{ref:v,className:z,style:g.arrow})),b):b)}const C={featured:{label:"\xa0\u2665\ufe0f Featured",description:"This tag is used for admin-curated templates that represent high-quality (community) or official (Microsoft) azd templates.",color:"red"},azurekubernetesservice:{label:"Azure Kubernetes Service",description:"Azure Kubernetes Service",color:"#5A57E6"},azurecontainerapps:{label:"Azure Container Apps",description:"Azure Container Apps",color:"#5A57E6"},azurefunctions:{label:"Azure Functions",description:"Azure Functions",color:"#5A57E6"},azureopenai:{label:"Azure OpenAI",description:"Azure OpenAI",color:"#5A57E6"},azureeventgrid:{label:"Azure Event Grid",description:"Azure Event Grid",color:"#5A57E6"},azurelogicapps:{label:"Azure Logic Apps",description:"Azure Logic Apps",color:"#5A57E6"},github:{label:"GitHub",description:"GitHub",color:"#5A57E6"},cosmosdb:{label:"Cosmos DB",description:"Cosmos DM",color:"#5A57E6"},serverless:{label:"Serverless",description:"Serverless",color:"#8661c5"},cloudnative:{label:"Cloud-native",description:"Cloud-native",color:"#8661c5"},ai:{label:"AI",description:"AI",color:"#8661c5"},database:{label:"Database",description:"Database",color:"#8661c5"},devtools:{label:"Dev Tools",description:"Dev Tools",color:"#8661c5"},kubernetes:{label:"Kubernetes",description:"Kubernetes",color:"#8661c5"},blog:{label:"Blog",description:"Blog",color:"#C03BC4"},codesample:{label:"Code Sample",description:"Code Sample",color:"#C03BC4"},video:{label:"Video",description:"Video",color:"#C03BC4"}},S=JSON.parse('[{"title":"Cloud-Native New Year - Azure Kubernetes Service","description":"Join the Azure Kubernetes Service Product Group this New Year to learn about cloud-native development using Kubernetes on Azure computing. It is time to accelerate your cloud-native application development leveraging the de-facto container platform, Kubernetes. 
Discuss with the experts on how to develop, manage, scale and secure managed Kubernetes clusters on Azure with an end-to-end development and management experience using Azure Kubernetes Service and Azure Fleet Manager.","preview":"","website":"https://learn.microsoft.com/en-us/shows/Ask-the-Expert/","author":"Ask the Expert","source":"https://learn.microsoft.com/en-us/shows/ask-the-expert/cloud-native-new-year-azure-kubernetes-service","tags":["video","azurekubernetesservice","kubernetes"]},{"title":"Ask the Expert: Serverless September | Azure Container Apps","description":"Join the Azure Container Apps Product Group this Serverless September to learn about serverless containers purpose-built for microservices. Azure Container Apps is an app-centric service, empowering developers to focus on the differentiating business logic of their apps rather than on cloud infrastructure management. Discuss with the experts on how to build and deploy modern apps and microservices using serverless containers with Azure Container Apps.","preview":"","website":"https://learn.microsoft.com/en-us/shows/Ask-the-Expert/","author":"Ask the Expert","source":"https://learn.microsoft.com/en-us/shows/ask-the-expert/serverless-september-azure-container-apps","tags":["video","azurecontainerapps","serverless"]},{"title":"Ask the Expert: Serverless September | Azure Functions","description":"Join the Azure Functions Product Group this Serverless September to learn about FaaS or Functions-as-a-Service in Azure serverless computing. It is time to focus on the pieces of code that matter most to you while Azure Functions handles the rest. Discuss with the experts on how to execute event-driven serverless code functions with an end-to-end development experience using Azure Functions.","preview":"","website":"https://learn.microsoft.com/en-us/shows/Ask-the-Expert/","author":"Ask the Expert","source":"https://learn.microsoft.com/en-us/shows/ask-the-expert/serverless-september-azure-functions","tags":["video","azurefunctions","serverless"]},{"title":"What the Hack: Serverless walkthrough","description":"The Azure Serverless What The Hack will take you through architecting a serverless solution on Azure for the use case of a Tollbooth Application that needs to meet demand for event driven scale. This is a challenge-based hack. It\u2019s NOT step-by-step. Don\u2019t worry, you will do great whatever your level of experience!","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://youtube.com/playlist?list=PLmsFUfdnGr3wg9NCWGYGw0IJORaqXhzLP","tags":["video","serverless","cloudnative"]},{"title":"Building and scaling cloud-native intelligent applications on Azure","description":"Learn how to run cloud-native serverless and container applications in Azure using Azure Kubernetes Service and Azure Container Apps. We help you choose the right service for your apps. We show you how Azure is the best platform for hosting cloud native and intelligent apps, and an app using Azure OpenAI Service and Azure Data. 
Learn all the new capabilities of our container platforms including how to deploy, test for scale, monitor, and much more.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=luu54Z1-45Y&pp=ygVEQnVpbGRpbmcgYW5kIHNjYWxpbmcgY2xvdWQtbmF0aXZlLCBpbnRlbGxpZ2VudCBhcHBsaWNhdGlvbnMgb24gQXp1cmU%3D","tags":["video","azurekubernetesservice","azurecontainerapps","azureopenai","ai","kubernetes"]},{"title":"Build scalable, cloud-native apps with AKS and Azure Cosmos DB","description":"Develop, deploy, and scale cloud-native applications that are high-performance, fast, and can handle traffic bursts with ease. Explore the latest news and capabilities for Azure Kubernetes Service (AKS) and Azure Cosmos DB, and hear from Rockwell Automation about how they\'ve used Azure\'s cloud-scale app and data services to create global applications.","preview":"","website":"https://azure.microsoft.com/en-us/products/cosmos-db","author":"Azure Cosmos DB","source":"https://www.youtube.com/watch?v=sL-aUxmEHEE&ab_channel=AzureCosmosDB","tags":["video","azurekubernetesservice","cosmosdb","kubernetes"]},{"title":"Integrating Azure AI and Azure Kubernetes Service to build intelligent apps","description":"Build intelligent apps that leverage Azure AI services for natural language processing, machine learning, Azure OpenAI Service with Azure Kubernetes Service (AKS) and other Azure application platform services. Learn best practices to help you achieve optimal scalability, reliability and automation with CI/CD using GitHub. By the end of this session, you will have a better understanding of how to build and deploy intelligent applications on Azure that deliver measurable impact.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=LhJODembils&ab_channel=MicrosoftDeveloper","tags":["video","azurekubernetesservice","azureopenai","github","ai","kubernetes"]},{"title":"Build an intelligent application fast and flexibly using Open Source on Azure","description":"Watch this end-to-end demo of an intelligent app that was built using a combination of open source technologies developed by Microsoft and the community. Highlights of the demo include announcements and key technologies.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=Dm9GoPit53w&ab_channel=MicrosoftAzure","tags":["video","azurefunctions"]},{"title":"Build Intelligent Microservices with Azure Container Apps","description":"Azure Container Apps (ACA) is a great place to run intelligent microservices, APIs, event-driven apps, and more. Infuse AI with Azure Container Apps jobs, leverage adaptable design patterns with Dapr, and explore flexible containerized compute for microservices across serverless or dedicated options.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=G55ivXuwwOY&ab_channel=MicrosoftDeveloper","tags":["video","azurecontainerapps"]},{"title":"Deliver apps from code to cloud with Azure Kubernetes Service","description":"Do you want to build and run cloud-native apps in Microsoft Azure with ease and confidence? Do you want to leverage the power and flexibility of Kubernetes, without the hassle and complexity of managing it yourself? 
Or maybe you want to learn about the latest and greatest features and integrations that Azure Kubernetes Service (AKS) has to offer? If you answered yes to any of these questions, then this session is for you!","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=PZxGu7DllJA&ab_channel=MicrosoftDeveloper","tags":["video","azurekubernetesservice","kubernetes"]},{"title":"Modernizing your applications with containers and serverless","description":"Dive into how cloud-native architectures and technologies can be applied to help build resilient and modern applications. Learn how to use technologies like containers, Kubernetes and serverless integrated with other application ecosystem services to build and deploy microservices architecture on Microsoft Azure. This discussion is ideal for developers, architects, and IT pros who want to learn how to effectively leverage Azure services to build, run and scale modern cloud-native applications.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=S5xk3w4zJxw&ab_channel=MicrosoftDeveloper","tags":["video","azurekubernetesservice","kubernetes"]},{"title":"Modernizing with containers and serverless Q&A","description":"Join the Azure cloud-native team to dive deeper into developing modern apps on cloud with containers and serverless technologies. Explore how to leverage the latest product advancements in Azure Kubernetes Service, Azure Container Apps and Azure Functions for scenarios that work best for cloud-native development. The experts cover best practices on how to develop with in-built open-source components like Kubernetes, KEDA, and Dapr to achieve high performance along with dynamic scaling.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=_MnRYGtvJDI&ab_channel=MicrosoftDeveloper","tags":["video","azurekubernetesservice","azurecontainerapps","azurefunctions","kubernetes"]},{"title":"Focus on code not infra with Azure Functions Azure Spring Apps Dapr","description":"Explore an easy on-ramp to build your cloud-native APIs with containers in the cloud. Build an application using Azure Spring APIs to send messages to Dapr enabled message broker, triggering optimized processing with Azure Functions, all hosted in the same Azure Container Apps environment. This unified experience for microservices hosts multitype apps that interact with each other using Dapr, scale dynamically with KEDA, and focus on code, offering a true high productivity developer experience.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=_MnRYGtvJDI&ab_channel=MicrosoftDeveloper","tags":["video","azurefunctions","azurecontainerapps"]},{"title":"Hack Together Launch \u2013 Opening Keynote","description":"Join us for an in-depth walkthrough of the Contoso Real Estate project, with a focus on the portal app architecture (Full stack application). During this session, we\'ll guide you through the key components of the architecture and show you how to set up your own environment for the project. We\'ll also provide detailed contribution instructions to help you get involved in the project and make a meaningful impact. 
Whether you\'re a seasoned developer or just getting started, this session is a must-attend for anyone interested in building scalable, modern web applications.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20275/","tags":["video","azurefunctions","cosmosdb","featured"]},{"title":"Introduction to GitHub Copilot","description":"Join us for an exciting introduction to GitHub Copilot, the revolutionary AI-powered coding assistant. In this session, you\'ll learn how to harness the power of Copilot to write code faster and more efficiently than ever before. We\'ll cover the basics of Copilot, including how to install and configure it, and walk you through a series of hands-on exercises to help you get started. Whether you\'re a seasoned developer or just starting out, this session is the perfect way to take your coding skills to the next level.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20321/","tags":["video","github","ai","featured"]},{"title":"Build your Frontend with Azure Static Web Apps","description":"In this session, we\'ll give you a gentle introduction to Static Web Apps and the SWA CLI. We\'ll start by discussing the benefits of using Static Web Apps for your web projects and how the SWA CLI can help you get started quickly. Then, we\'ll dive into a demo of the Contoso Real Estate project, showing you how it uses Static Web Apps to deploy changes quickly and easily.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20276/","tags":["video","azurefunctions","cosmosdb","featured"]},{"title":"Build a Serverless Backend with Azure Functions","description":"In this session, we\'ll give you a gentle introduction to serverless backends and how Azure Functions can help you build them quickly and easily. We\'ll start by discussing the benefits of using serverless backends for your web projects and how Azure Functions can help you get started quickly. Then, we\'ll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Functions to power its backend.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20277/","tags":["video","azurefunctions","cosmosdb","serverless","featured"]},{"title":"Build and connect to a Database using Azure Cosmos DB","description":"In this session, we\'ll give you a gentle introduction to Azure Cosmos DB and how it can help you store and manage your data in the cloud. We\'ll start by discussing the benefits of using Azure Cosmos DB for your data storage needs, including its global distribution and scalability. Then, we\'ll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Cosmos DB to store its data.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20278/","tags":["video","cosmosdb","featured"]},{"title":"Introduction to Azure OpenAI Service","description":"Join us for an exciting introduction to the world of AI with Azure OpenAI. 
In this session, you\'ll learn how to harness the power of OpenAI to build intelligent applications that can learn, reason, and adapt. We\'ll cover the basics of Azure OpenAI, including how to set up and configure your environment, and walk you through a series of hands-on exercises to help you get started. Whether you\'re a seasoned developer or just starting out, this session is the perfect way to unlock the full potential of AI and take your applications to the next level.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20322/","tags":["video","azureopenai","ai","featured"]},{"title":"Azure Samples / Contoso Real Estate","description":"This repository contains the reference architecture and components for building enterprise-grade modern composable frontends (or micro-frontends) and cloud-native applications. It is a collection of best practices, architecture patterns, and functional components that can be used to build and deploy modern JavaScript applications to Azure.","preview":"","website":"https://github.com/Azure-Samples","author":"Azure Samples","source":"https://aka.ms/contoso-real-estate-github","tags":["codesample","azurefunctions","azurecontainerapps","github","cosmosdb"]},{"title":"Azure Samples / Azure Container Apps That Use OpenAI","description":"This sample demonstrates how to quickly build chat applications using Python and leveraging powerful technologies such as OpenAI ChatGPT models, Embedding models, LangChain framework, ChromaDB vector database, and Chainlit, an open-source Python package that is specifically designed to create user interfaces (UIs) for AI applications. These applications are hosted on Azure Container Apps, a fully managed environment that enables you to run microservices and containerized applications on a serverless platform.","preview":"","website":"https://github.com/Azure-Samples","author":"Azure Samples","source":"https://github.com/Azure-Samples/container-apps-openai","tags":["codesample","azurecontainerapps","azureopenai"]}]'),x=Object.keys(C);const I=function(){let e=S;return e=c(e,(e=>e.title.toLowerCase())),e}(),_=r.forwardRef(((e,t)=>{let{label:o,color:s,description:a}=e;return r.createElement("li",{ref:t,className:w.tag,title:a},r.createElement("span",{className:w.textLabel},o.toLowerCase()),r.createElement("span",{className:w.colorLabel,style:{backgroundColor:s}}))}));function D(e){let{tags:t}=e;const o=c(t.map((e=>({tag:e,...C[e]}))),(e=>x.indexOf(e.tag)));return r.createElement(r.Fragment,null,o.map(((e,t)=>{const o=`showcase_card_tag_${e.tag}`;return r.createElement(k,{key:t,text:e.description,anchorEl:"#__docusaurus",id:o},r.createElement(_,(0,n.Z)({key:t},e)))})))}function L(e){let{user:t}=e;const o=t.author,s=t.website;if(o.includes("|")){var n=s.split("|"),i=o.split("|");return r.createElement("div",{className:"dropdown dropdown--right dropdown--hoverable"},r.createElement("button",{className:(0,a.Z)("button button--secondary button--sm",w.showcaseCardSrcBtn)},"Author"),r.createElement("ul",{className:"dropdown__menu"},n.map(((e,t)=>{return o=i[t],s=n[t],r.createElement("li",null,r.createElement("a",{className:"dropdown__link",href:s},o));var o,s}))))}return r.createElement("div",{className:"author"},r.createElement("p",{className:"margin-bottom--none"},"Author"),r.createElement("a",{href:s},o))}function N(e){let{user:t}=e;return 
r.createElement("li",{key:t.title,className:"card"},r.createElement("div",{className:"card__body"},r.createElement("div",null,r.createElement("h3",null,t.title),t.source&&r.createElement(L,{user:t})),r.createElement("p",null,t.description),r.createElement(D,{tags:t.tags})),r.createElement("ul",{className:(0,a.Z)("card__footer",w.cardFooter)},r.createElement("div",{className:w.buttons},r.createElement(g.Z,{className:"button button--block button--secondary button--md",href:t.source},"View this post"))))}const F=r.memo(N);var B=o(36136),T=o(97325),M=o(23777),Z=o(90771);function K(){var e;if(B.Z.canUseDOM)return{scrollTopPosition:window.scrollY,focusedElementId:null==(e=document.activeElement)?void 0:e.id}}const R="name";function O(e){return new URLSearchParams(e).get(R)}function P(){const e=(0,i.TH)(),[t,o]=(0,r.useState)("OR"),[s,a]=(0,r.useState)([]),[n,c]=(0,r.useState)(null);return(0,r.useEffect)((()=>{a(d(e.search)),o(b(e.search)),c(O(e.search)),function(e){var t;const{scrollTopPosition:o,focusedElementId:r}=e??{scrollTopPosition:0,focusedElementId:void 0};null==(t=document.getElementById(r))||t.focus(),window.scrollTo({top:o})}(e.state)}),[e]),(0,r.useMemo)((()=>function(e,t,o,r){return r&&(e=e.filter((e=>e.title.toLowerCase().includes(r.toLowerCase())))),0===t.length?e:e.filter((e=>0!==e.tags.length&&("AND"===o?t.every((t=>e.tags.includes(t))):t.some((t=>e.tags.includes(t))))))}(I,s,t,n)),[s,t,n])}function W(){return r.createElement("header",{className:(0,a.Z)("hero hero--primary",Z.Z.heroBanner)},r.createElement("div",{className:"container text--center"},r.createElement("h1",{className:"hero__title"},"Community Gallery"),r.createElement("p",null,"Explore the Community Showcase for videos, blog posts and open-source projects from the community.")))}function G(){const e=P(),t=function(){const{selectMessage:e}=(0,M.c)();return t=>e(t,(0,T.I)({id:"showcase.filters.resultCount",description:'Pluralized label for the number of sites found on the showcase. 
Use as much plural forms (separated by "|") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)',message:"1 post|{sitesCount} posts"},{sitesCount:t}))}();return r.createElement("section",{className:"container margin-top--lg"},r.createElement("div",{className:(0,a.Z)("margin-bottom--sm",Z.Z.filterCheckbox)},r.createElement("div",null,r.createElement("h2",null,r.createElement(T.Z,{id:"showcase.filters.title"},"Filters")),r.createElement("span",null,t(e.length))),r.createElement(f,null)),r.createElement("hr",null),r.createElement("ul",{className:Z.Z.checkboxList},x.map(((e,t)=>{const{label:o,description:s,color:a}=C[e],n=`showcase_checkbox_id_${e}`;return r.createElement("li",{key:t,className:Z.Z.checkboxListItem},r.createElement(k,{id:n,text:s,anchorEl:"#__docusaurus"},r.createElement(h,{tag:e,id:n,label:o,icon:r.createElement("span",{style:{backgroundColor:a,width:10,height:10,borderRadius:"50%",marginLeft:8}})})))}))))}const H=I.filter((e=>e.tags.includes("featured")));function j(){const e=(0,i.k6)(),t=(0,i.TH)(),[o,s]=(0,r.useState)(null);return(0,r.useEffect)((()=>{s(O(t.search))}),[t]),r.createElement("div",{className:Z.Z.searchContainer},r.createElement("input",{id:"searchbar",placeholder:(0,T.I)({message:"Search posts by name...",id:"showcase.searchBar.placeholder"}),value:o??void 0,"aria-label":"Search posts by name...",onInput:o=>{s(o.currentTarget.value);const r=new URLSearchParams(t.search);r.delete(R),o.currentTarget.value&&r.set(R,o.currentTarget.value),e.push({...t,search:r.toString(),state:K()}),setTimeout((()=>{var e;null==(e=document.getElementById("searchbar"))||e.focus()}),0)}}))}function U(){const e=P();return 0===e.length?r.createElement("section",{className:"margin-top--lg margin-bottom--xl"},r.createElement("div",{className:"container padding-vert--md text--center"},r.createElement("h2",null,r.createElement(T.Z,{id:"showcase.usersList.noResult"},"No results found for this search")),r.createElement(j,null))):r.createElement("section",{className:"margin-top--lg margin-bottom--xl"},e.length===I.length?r.createElement(r.Fragment,null,r.createElement("div",{className:Z.Z.showcaseFavorite},r.createElement("div",{className:"container"},r.createElement("div",{className:(0,a.Z)("margin-bottom--md",Z.Z.showcaseFavoriteHeader)},r.createElement("h2",null,r.createElement(T.Z,{id:"showcase.favoritesList.title"},"Featured Posts")),r.createElement(j,null)),r.createElement("hr",null),r.createElement("ul",{className:(0,a.Z)("container",Z.Z.showcaseList)},H.map((e=>r.createElement(F,{key:e.title,user:e})))))),r.createElement("div",{className:"container margin-top--lg"},r.createElement("h2",{className:Z.Z.showcaseHeader},r.createElement(T.Z,{id:"showcase.usersList.allUsers"},"All Posts")),r.createElement("hr",null),r.createElement("ul",{className:Z.Z.showcaseList},I.map((e=>r.createElement(F,{key:e.title,user:e})))))):r.createElement("div",{className:"container"},r.createElement("div",{className:(0,a.Z)("margin-bottom--md",Z.Z.showcaseFavoriteHeader)},r.createElement(j,null)),r.createElement("ul",{className:Z.Z.showcaseList},e.map((e=>r.createElement(F,{key:e.title,user:e}))))))}function J(){return r.createElement(s.Z,{title:"#FallForIA | Community Gallery",description:"A community-contributed showcase gallery"},r.createElement("main",null,r.createElement(W,null),r.createElement(G,null),r.createElement(U,null)))}},90771:(e,t,o)=>{o.d(t,{Z:()=>r});const 
r={heroBanner:"heroBanner_Lyfz",featureImg:"featureImg_Pn4X",features:"features_lsQP",featureSvg:"featureSvg_TGID",filterCheckbox:"filterCheckbox_Zhje",checkboxList:"checkboxList__B7U",showcaseList:"showcaseList_VnWw",checkboxListItem:"checkboxListItem_h7pj",searchContainer:"searchContainer_AsVt",showcaseFavorite:"showcaseFavorite_j9VZ",showcaseHelpWanted:"showcaseHelpWanted_AzKS",helpText:"helpText_Bk3N",showcaseFavoriteHeader:"showcaseFavoriteHeader_orWO",svgIconFavoriteXs:"svgIconFavoriteXs_nM3j",svgIconFavorite:"svgIconFavorite_Ks9A",hide:"hide_Cov8"}}}]); \ No newline at end of file +"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[38057],{2569:(e,t,o)=>{o.r(t),o.d(t,{default:()=>J,prepareUserState:()=>K});var r=o(67294),s=o(91764),a=o(86010),n=o(87462),i=o(76775);function c(e,t){const o=[...e];return o.sort(((e,o)=>t(e)>t(o)?1:t(o)>t(e)?-1:0)),o}const l="checkboxLabel_pwqD",u="tags";function d(e){return new URLSearchParams(e).getAll(u)}function p(e,t){let{id:o,icon:s,label:a,tag:c,...p}=e;const h=(0,i.TH)(),m=(0,i.k6)(),[v,b]=(0,r.useState)(!1);(0,r.useEffect)((()=>{const e=d(h.search);b(e.includes(c))}),[c,h]);const f=(0,r.useCallback)((()=>{const e=function(e,t){const o=e.indexOf(t);if(-1===o)return e.concat(t);const r=[...e];return r.splice(o,1),r}(d(h.search),c),t=function(e,t){const o=new URLSearchParams(e);return o.delete(u),t.forEach((e=>o.append(u,e))),o.toString()}(h.search,e);m.push({...h,search:t,state:K()})}),[c,h,m]);return r.createElement(r.Fragment,null,r.createElement("input",(0,n.Z)({type:"checkbox",id:o,className:"screen-reader-only",onKeyDown:e=>{"Enter"===e.key&&f()},onFocus:e=>{var t;e.relatedTarget&&(null==(t=e.target.nextElementSibling)||t.dispatchEvent(new KeyboardEvent("focus")))},onBlur:e=>{var t;null==(t=e.target.nextElementSibling)||t.dispatchEvent(new KeyboardEvent("blur"))},onChange:f,checked:v},p)),r.createElement("label",{ref:t,htmlFor:o,className:l},a,s))}const h=r.forwardRef(p),m={checkboxLabel:"checkboxLabel_FmrE"},v="operator";function b(e){return new URLSearchParams(e).get(v)??"OR"}function f(){const e="showcase_filter_toggle",t=(0,i.TH)(),o=(0,i.k6)(),[s,n]=(0,r.useState)(!1);(0,r.useEffect)((()=>{n("AND"===b(t.search))}),[t]);const c=(0,r.useCallback)((()=>{n((e=>!e));const e=new URLSearchParams(t.search);e.delete(v),s||e.append(v,s?"OR":"AND"),o.push({...t,search:e.toString(),state:K()})}),[s,t,o]);return r.createElement("div",null,r.createElement("input",{type:"checkbox",id:e,className:"screen-reader-only","aria-label":"Toggle between or and and for the tags you selected",onChange:c,onKeyDown:e=>{"Enter"===e.key&&c()},checked:s}),r.createElement("label",{htmlFor:e,className:(0,a.Z)(m.checkboxLabel,"shadow--md")},r.createElement("span",{className:m.checkboxLabelOr},"OR"),r.createElement("span",{className:m.checkboxLabelAnd},"AND")))}var g=o(83699);const w={showcaseCardImage:"showcaseCardImage_qZMA",showcaseCardHeader:"showcaseCardHeader_tfIV",showcaseCardTitle:"showcaseCardTitle_PRHG",svgIconFavorite:"svgIconFavorite_RKtI",showcaseCardSrcBtn:"showcaseCardSrcBtn_AI8i",showcaseCardBody:"showcaseCardBody_I0O5",cardFooter:"cardFooter_EuCG",tag:"tag_Aixk",textLabel:"textLabel_SLNc",colorLabel:"colorLabel_q5Sy"};var A=o(73935),y=o(95237);const E="tooltip_hKx1",z="tooltipArrow_yATY";function 
k(e){let{children:t,id:o,anchorEl:s,text:a,delay:i}=e;const[c,l]=(0,r.useState)(!1),[u,d]=(0,r.useState)(null),[p,h]=(0,r.useState)(null),[m,v]=(0,r.useState)(null),[b,f]=(0,r.useState)(null),{styles:g,attributes:w}=(0,y.D)(u,p,{modifiers:[{name:"arrow",options:{element:m}},{name:"offset",options:{offset:[0,8]}}]}),k=(0,r.useRef)(null),C=`${o}_tooltip`;return(0,r.useEffect)((()=>{f(s?"string"==typeof s?document.querySelector(s):s:document.body)}),[b,s]),(0,r.useEffect)((()=>{const e=["mouseenter","focus"],t=["mouseleave","blur"],o=()=>{""!==a&&(null==u||u.removeAttribute("title"),k.current=window.setTimeout((()=>{l(!0)}),i||400))},r=()=>{clearInterval(k.current),l(!1)};return u&&(e.forEach((e=>{u.addEventListener(e,o)})),t.forEach((e=>{u.addEventListener(e,r)}))),()=>{u&&(e.forEach((e=>{u.removeEventListener(e,o)})),t.forEach((e=>{u.removeEventListener(e,r)})))}}),[u,a,i]),r.createElement(r.Fragment,null,r.cloneElement(t,{ref:d,"aria-describedby":c?C:void 0}),b?A.createPortal(c&&r.createElement("div",(0,n.Z)({id:C,role:"tooltip",ref:h,className:E,style:g.popper},w.popper),a,r.createElement("span",{ref:v,className:z,style:g.arrow})),b):b)}const C={featured:{label:"\xa0\u2665\ufe0f Featured",description:"This tag is used for admin-curated templates that represent high-quality (community) or official (Microsoft) azd templates.",color:"red"},azurekubernetesservice:{label:"Azure Kubernetes Service",description:"Azure Kubernetes Service",color:"#5A57E6"},azurecontainerapps:{label:"Azure Container Apps",description:"Azure Container Apps",color:"#5A57E6"},azurefunctions:{label:"Azure Functions",description:"Azure Functions",color:"#5A57E6"},azureopenai:{label:"Azure OpenAI",description:"Azure OpenAI",color:"#5A57E6"},azureeventgrid:{label:"Azure Event Grid",description:"Azure Event Grid",color:"#5A57E6"},azurelogicapps:{label:"Azure Logic Apps",description:"Azure Logic Apps",color:"#5A57E6"},github:{label:"GitHub",description:"GitHub",color:"#5A57E6"},cosmosdb:{label:"Cosmos DB",description:"Cosmos DM",color:"#5A57E6"},serverless:{label:"Serverless",description:"Serverless",color:"#8661c5"},cloudnative:{label:"Cloud-native",description:"Cloud-native",color:"#8661c5"},ai:{label:"AI",description:"AI",color:"#8661c5"},database:{label:"Database",description:"Database",color:"#8661c5"},devtools:{label:"Dev Tools",description:"Dev Tools",color:"#8661c5"},kubernetes:{label:"Kubernetes",description:"Kubernetes",color:"#8661c5"},blog:{label:"Blog",description:"Blog",color:"#C03BC4"},codesample:{label:"Code Sample",description:"Code Sample",color:"#C03BC4"},video:{label:"Video",description:"Video",color:"#C03BC4"}},S=JSON.parse('[{"title":"Cloud-Native New Year - Azure Kubernetes Service","description":"Join the Azure Kubernetes Service Product Group this New Year to learn about cloud-native development using Kubernetes on Azure computing. It is time to accelerate your cloud-native application development leveraging the de-facto container platform, Kubernetes. 
Discuss with the experts on how to develop, manage, scale and secure managed Kubernetes clusters on Azure with an end-to-end development and management experience using Azure Kubernetes Service and Azure Fleet Manager.","preview":"","website":"https://learn.microsoft.com/en-us/shows/Ask-the-Expert/","author":"Ask the Expert","source":"https://learn.microsoft.com/en-us/shows/ask-the-expert/cloud-native-new-year-azure-kubernetes-service","tags":["video","azurekubernetesservice","kubernetes"]},{"title":"Ask the Expert: Serverless September | Azure Container Apps","description":"Join the Azure Container Apps Product Group this Serverless September to learn about serverless containers purpose-built for microservices. Azure Container Apps is an app-centric service, empowering developers to focus on the differentiating business logic of their apps rather than on cloud infrastructure management. Discuss with the experts on how to build and deploy modern apps and microservices using serverless containers with Azure Container Apps.","preview":"","website":"https://learn.microsoft.com/en-us/shows/Ask-the-Expert/","author":"Ask the Expert","source":"https://learn.microsoft.com/en-us/shows/ask-the-expert/serverless-september-azure-container-apps","tags":["video","azurecontainerapps","serverless"]},{"title":"Ask the Expert: Serverless September | Azure Functions","description":"Join the Azure Functions Product Group this Serverless September to learn about FaaS or Functions-as-a-Service in Azure serverless computing. It is time to focus on the pieces of code that matter most to you while Azure Functions handles the rest. Discuss with the experts on how to execute event-driven serverless code functions with an end-to-end development experience using Azure Functions.","preview":"","website":"https://learn.microsoft.com/en-us/shows/Ask-the-Expert/","author":"Ask the Expert","source":"https://learn.microsoft.com/en-us/shows/ask-the-expert/serverless-september-azure-functions","tags":["video","azurefunctions","serverless"]},{"title":"What the Hack: Serverless walkthrough","description":"The Azure Serverless What The Hack will take you through architecting a serverless solution on Azure for the use case of a Tollbooth Application that needs to meet demand for event driven scale. This is a challenge-based hack. It\u2019s NOT step-by-step. Don\u2019t worry, you will do great whatever your level of experience!","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://youtube.com/playlist?list=PLmsFUfdnGr3wg9NCWGYGw0IJORaqXhzLP","tags":["video","serverless","cloudnative"]},{"title":"Building and scaling cloud-native intelligent applications on Azure","description":"Learn how to run cloud-native serverless and container applications in Azure using Azure Kubernetes Service and Azure Container Apps. We help you choose the right service for your apps. We show you how Azure is the best platform for hosting cloud native and intelligent apps, and an app using Azure OpenAI Service and Azure Data. 
Learn all the new capabilities of our container platforms including how to deploy, test for scale, monitor, and much more.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=luu54Z1-45Y&pp=ygVEQnVpbGRpbmcgYW5kIHNjYWxpbmcgY2xvdWQtbmF0aXZlLCBpbnRlbGxpZ2VudCBhcHBsaWNhdGlvbnMgb24gQXp1cmU%3D","tags":["video","azurekubernetesservice","azurecontainerapps","azureopenai","ai","kubernetes"]},{"title":"Build scalable, cloud-native apps with AKS and Azure Cosmos DB","description":"Develop, deploy, and scale cloud-native applications that are high-performance, fast, and can handle traffic bursts with ease. Explore the latest news and capabilities for Azure Kubernetes Service (AKS) and Azure Cosmos DB, and hear from Rockwell Automation about how they\'ve used Azure\'s cloud-scale app and data services to create global applications.","preview":"","website":"https://azure.microsoft.com/en-us/products/cosmos-db","author":"Azure Cosmos DB","source":"https://www.youtube.com/watch?v=sL-aUxmEHEE&ab_channel=AzureCosmosDB","tags":["video","azurekubernetesservice","cosmosdb","kubernetes"]},{"title":"Integrating Azure AI and Azure Kubernetes Service to build intelligent apps","description":"Build intelligent apps that leverage Azure AI services for natural language processing, machine learning, Azure OpenAI Service with Azure Kubernetes Service (AKS) and other Azure application platform services. Learn best practices to help you achieve optimal scalability, reliability and automation with CI/CD using GitHub. By the end of this session, you will have a better understanding of how to build and deploy intelligent applications on Azure that deliver measurable impact.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=LhJODembils&ab_channel=MicrosoftDeveloper","tags":["video","azurekubernetesservice","azureopenai","github","ai","kubernetes"]},{"title":"Build an intelligent application fast and flexibly using Open Source on Azure","description":"Watch this end-to-end demo of an intelligent app that was built using a combination of open source technologies developed by Microsoft and the community. Highlights of the demo include announcements and key technologies.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=Dm9GoPit53w&ab_channel=MicrosoftAzure","tags":["video","azurefunctions"]},{"title":"Build Intelligent Microservices with Azure Container Apps","description":"Azure Container Apps (ACA) is a great place to run intelligent microservices, APIs, event-driven apps, and more. Infuse AI with Azure Container Apps jobs, leverage adaptable design patterns with Dapr, and explore flexible containerized compute for microservices across serverless or dedicated options.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=G55ivXuwwOY&ab_channel=MicrosoftDeveloper","tags":["video","azurecontainerapps"]},{"title":"Deliver apps from code to cloud with Azure Kubernetes Service","description":"Do you want to build and run cloud-native apps in Microsoft Azure with ease and confidence? Do you want to leverage the power and flexibility of Kubernetes, without the hassle and complexity of managing it yourself? 
Or maybe you want to learn about the latest and greatest features and integrations that Azure Kubernetes Service (AKS) has to offer? If you answered yes to any of these questions, then this session is for you!","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=PZxGu7DllJA&ab_channel=MicrosoftDeveloper","tags":["video","azurekubernetesservice","kubernetes"]},{"title":"Modernizing your applications with containers and serverless","description":"Dive into how cloud-native architectures and technologies can be applied to help build resilient and modern applications. Learn how to use technologies like containers, Kubernetes and serverless integrated with other application ecosystem services to build and deploy microservices architecture on Microsoft Azure. This discussion is ideal for developers, architects, and IT pros who want to learn how to effectively leverage Azure services to build, run and scale modern cloud-native applications.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=S5xk3w4zJxw&ab_channel=MicrosoftDeveloper","tags":["video","azurekubernetesservice","kubernetes"]},{"title":"Modernizing with containers and serverless Q&A","description":"Join the Azure cloud-native team to dive deeper into developing modern apps on cloud with containers and serverless technologies. Explore how to leverage the latest product advancements in Azure Kubernetes Service, Azure Container Apps and Azure Functions for scenarios that work best for cloud-native development. The experts cover best practices on how to develop with in-built open-source components like Kubernetes, KEDA, and Dapr to achieve high performance along with dynamic scaling.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=_MnRYGtvJDI&ab_channel=MicrosoftDeveloper","tags":["video","azurekubernetesservice","azurecontainerapps","azurefunctions","kubernetes"]},{"title":"Focus on code not infra with Azure Functions Azure Spring Apps Dapr","description":"Explore an easy on-ramp to build your cloud-native APIs with containers in the cloud. Build an application using Azure Spring APIs to send messages to Dapr enabled message broker, triggering optimized processing with Azure Functions, all hosted in the same Azure Container Apps environment. This unified experience for microservices hosts multitype apps that interact with each other using Dapr, scale dynamically with KEDA, and focus on code, offering a true high productivity developer experience.","preview":"","website":"https://developer.microsoft.com/en-us/","author":"Microsoft Developer","source":"https://www.youtube.com/watch?v=_MnRYGtvJDI&ab_channel=MicrosoftDeveloper","tags":["video","azurefunctions","azurecontainerapps"]},{"title":"Hack Together Launch \u2013 Opening Keynote","description":"Join us for an in-depth walkthrough of the Contoso Real Estate project, with a focus on the portal app architecture (Full stack application). During this session, we\'ll guide you through the key components of the architecture and show you how to set up your own environment for the project. We\'ll also provide detailed contribution instructions to help you get involved in the project and make a meaningful impact. 
Whether you\'re a seasoned developer or just getting started, this session is a must-attend for anyone interested in building scalable, modern web applications.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20275/","tags":["video","azurefunctions","cosmosdb","featured"]},{"title":"Introduction to GitHub Copilot","description":"Join us for an exciting introduction to GitHub Copilot, the revolutionary AI-powered coding assistant. In this session, you\'ll learn how to harness the power of Copilot to write code faster and more efficiently than ever before. We\'ll cover the basics of Copilot, including how to install and configure it, and walk you through a series of hands-on exercises to help you get started. Whether you\'re a seasoned developer or just starting out, this session is the perfect way to take your coding skills to the next level.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20321/","tags":["video","github","ai","featured"]},{"title":"Build your Frontend with Azure Static Web Apps","description":"In this session, we\'ll give you a gentle introduction to Static Web Apps and the SWA CLI. We\'ll start by discussing the benefits of using Static Web Apps for your web projects and how the SWA CLI can help you get started quickly. Then, we\'ll dive into a demo of the Contoso Real Estate project, showing you how it uses Static Web Apps to deploy changes quickly and easily.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20276/","tags":["video","azurefunctions","cosmosdb","featured"]},{"title":"Build a Serverless Backend with Azure Functions","description":"In this session, we\'ll give you a gentle introduction to serverless backends and how Azure Functions can help you build them quickly and easily. We\'ll start by discussing the benefits of using serverless backends for your web projects and how Azure Functions can help you get started quickly. Then, we\'ll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Functions to power its backend.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20277/","tags":["video","azurefunctions","cosmosdb","serverless","featured"]},{"title":"Build and connect to a Database using Azure Cosmos DB","description":"In this session, we\'ll give you a gentle introduction to Azure Cosmos DB and how it can help you store and manage your data in the cloud. We\'ll start by discussing the benefits of using Azure Cosmos DB for your data storage needs, including its global distribution and scalability. Then, we\'ll dive into a demo of the Contoso Real Estate project, showing you how it uses Azure Cosmos DB to store its data.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20278/","tags":["video","cosmosdb","featured"]},{"title":"Introduction to Azure OpenAI Service","description":"Join us for an exciting introduction to the world of AI with Azure OpenAI. 
In this session, you\'ll learn how to harness the power of OpenAI to build intelligent applications that can learn, reason, and adapt. We\'ll cover the basics of Azure OpenAI, including how to set up and configure your environment, and walk you through a series of hands-on exercises to help you get started. Whether you\'re a seasoned developer or just starting out, this session is the perfect way to unlock the full potential of AI and take your applications to the next level.","preview":"","website":"https://developer.microsoft.com/en-us/reactor/home/index/","author":"Microsoft Reactor","source":"https://developer.microsoft.com/reactor/events/20322/","tags":["video","azureopenai","ai","featured"]},{"title":"Azure Samples / Contoso Real Estate","description":"This repository contains the reference architecture and components for building enterprise-grade modern composable frontends (or micro-frontends) and cloud-native applications. It is a collection of best practices, architecture patterns, and functional components that can be used to build and deploy modern JavaScript applications to Azure.","preview":"","website":"https://github.com/Azure-Samples","author":"Azure Samples","source":"https://aka.ms/contoso-real-estate-github","tags":["codesample","azurefunctions","azurecontainerapps","github","cosmosdb"]},{"title":"Azure Samples / Azure Container Apps That Use OpenAI","description":"This sample demonstrates how to quickly build chat applications using Python and leveraging powerful technologies such as OpenAI ChatGPT models, Embedding models, LangChain framework, ChromaDB vector database, and Chainlit, an open-source Python package that is specifically designed to create user interfaces (UIs) for AI applications. These applications are hosted on Azure Container Apps, a fully managed environment that enables you to run microservices and containerized applications on a serverless platform.","preview":"","website":"https://github.com/Azure-Samples","author":"Azure Samples","source":"https://github.com/Azure-Samples/container-apps-openai","tags":["codesample","azurecontainerapps","azureopenai"]}]'),x=Object.keys(C);const I=function(){let e=S;return e=c(e,(e=>e.title.toLowerCase())),e}(),_=r.forwardRef(((e,t)=>{let{label:o,color:s,description:a}=e;return r.createElement("li",{ref:t,className:w.tag,title:a},r.createElement("span",{className:w.textLabel},o.toLowerCase()),r.createElement("span",{className:w.colorLabel,style:{backgroundColor:s}}))}));function D(e){let{tags:t}=e;const o=c(t.map((e=>({tag:e,...C[e]}))),(e=>x.indexOf(e.tag)));return r.createElement(r.Fragment,null,o.map(((e,t)=>{const o=`showcase_card_tag_${e.tag}`;return r.createElement(k,{key:t,text:e.description,anchorEl:"#__docusaurus",id:o},r.createElement(_,(0,n.Z)({key:t},e)))})))}function N(e){let{user:t}=e;const o=t.author,s=t.website;if(o.includes("|")){var n=s.split("|"),i=o.split("|");return r.createElement("div",{className:"dropdown dropdown--right dropdown--hoverable"},r.createElement("button",{className:(0,a.Z)("button button--secondary button--sm",w.showcaseCardSrcBtn)},"Author"),r.createElement("ul",{className:"dropdown__menu"},n.map(((e,t)=>{return o=i[t],s=n[t],r.createElement("li",null,r.createElement("a",{className:"dropdown__link",href:s},o));var o,s}))))}return r.createElement("div",{className:"author"},r.createElement("p",{className:"margin-bottom--none"},"Author"),r.createElement("a",{href:s},o))}function L(e){let{user:t}=e;return 
r.createElement("li",{key:t.title,className:"card"},r.createElement("div",{className:"card__body"},r.createElement("div",null,r.createElement("h3",null,t.title),t.source&&r.createElement(N,{user:t})),r.createElement("p",null,t.description)),r.createElement("ul",{className:(0,a.Z)("card__footer",w.cardFooter)},r.createElement("div",{className:"margin-bottom--md"},r.createElement(D,{tags:t.tags})),r.createElement("div",{className:w.buttons},r.createElement(g.Z,{className:"button button--block button--secondary button--md",href:t.source},"View this post"))))}const F=r.memo(L);var B=o(36136),T=o(97325),M=o(23777),Z=o(90771);function K(){var e;if(B.Z.canUseDOM)return{scrollTopPosition:window.scrollY,focusedElementId:null==(e=document.activeElement)?void 0:e.id}}const R="name";function O(e){return new URLSearchParams(e).get(R)}function P(){const e=(0,i.TH)(),[t,o]=(0,r.useState)("OR"),[s,a]=(0,r.useState)([]),[n,c]=(0,r.useState)(null);return(0,r.useEffect)((()=>{a(d(e.search)),o(b(e.search)),c(O(e.search)),function(e){var t;const{scrollTopPosition:o,focusedElementId:r}=e??{scrollTopPosition:0,focusedElementId:void 0};null==(t=document.getElementById(r))||t.focus(),window.scrollTo({top:o})}(e.state)}),[e]),(0,r.useMemo)((()=>function(e,t,o,r){return r&&(e=e.filter((e=>e.title.toLowerCase().includes(r.toLowerCase())))),0===t.length?e:e.filter((e=>0!==e.tags.length&&("AND"===o?t.every((t=>e.tags.includes(t))):t.some((t=>e.tags.includes(t))))))}(I,s,t,n)),[s,t,n])}function W(){return r.createElement("header",{className:(0,a.Z)("hero hero--primary",Z.Z.heroBanner)},r.createElement("div",{className:"container text--center"},r.createElement("h1",{className:"hero__title"},"Community Gallery"),r.createElement("p",null,"Explore the Community Showcase for videos, blog posts and open-source projects from the community.")))}function G(){const e=P(),t=function(){const{selectMessage:e}=(0,M.c)();return t=>e(t,(0,T.I)({id:"showcase.filters.resultCount",description:'Pluralized label for the number of sites found on the showcase. 
Use as much plural forms (separated by "|") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)',message:"1 post|{sitesCount} posts"},{sitesCount:t}))}();return r.createElement("section",{className:"container margin-top--lg"},r.createElement("div",{className:(0,a.Z)("margin-bottom--sm",Z.Z.filterCheckbox)},r.createElement("div",null,r.createElement("h2",null,r.createElement(T.Z,{id:"showcase.filters.title"},"Filters")),r.createElement("span",null,t(e.length))),r.createElement(f,null)),r.createElement("hr",null),r.createElement("ul",{className:Z.Z.checkboxList},x.map(((e,t)=>{const{label:o,description:s,color:a}=C[e],n=`showcase_checkbox_id_${e}`;return r.createElement("li",{key:t,className:Z.Z.checkboxListItem},r.createElement(k,{id:n,text:s,anchorEl:"#__docusaurus"},r.createElement(h,{tag:e,id:n,label:o,icon:r.createElement("span",{style:{backgroundColor:a,width:10,height:10,borderRadius:"50%",marginLeft:8}})})))}))))}const H=I.filter((e=>e.tags.includes("featured")));function j(){const e=(0,i.k6)(),t=(0,i.TH)(),[o,s]=(0,r.useState)(null);return(0,r.useEffect)((()=>{s(O(t.search))}),[t]),r.createElement("div",{className:Z.Z.searchContainer},r.createElement("input",{id:"searchbar",placeholder:(0,T.I)({message:"Search posts by name...",id:"showcase.searchBar.placeholder"}),value:o??void 0,"aria-label":"Search posts by name...",onInput:o=>{s(o.currentTarget.value);const r=new URLSearchParams(t.search);r.delete(R),o.currentTarget.value&&r.set(R,o.currentTarget.value),e.push({...t,search:r.toString(),state:K()}),setTimeout((()=>{var e;null==(e=document.getElementById("searchbar"))||e.focus()}),0)}}))}function U(){const e=P();return 0===e.length?r.createElement("section",{className:"margin-top--lg margin-bottom--xl"},r.createElement("div",{className:"container padding-vert--md text--center"},r.createElement("h2",null,r.createElement(T.Z,{id:"showcase.usersList.noResult"},"No results found for this search")),r.createElement(j,null))):r.createElement("section",{className:"margin-top--lg margin-bottom--xl"},e.length===I.length?r.createElement(r.Fragment,null,r.createElement("div",{className:Z.Z.showcaseFavorite},r.createElement("div",{className:"container"},r.createElement("div",{className:(0,a.Z)("margin-bottom--md",Z.Z.showcaseFavoriteHeader)},r.createElement("h2",null,r.createElement(T.Z,{id:"showcase.favoritesList.title"},"Featured Posts")),r.createElement(j,null)),r.createElement("hr",null),r.createElement("ul",{className:(0,a.Z)("container",Z.Z.showcaseList)},H.map((e=>r.createElement(F,{key:e.title,user:e})))))),r.createElement("div",{className:"container margin-top--lg"},r.createElement("h2",{className:Z.Z.showcaseHeader},r.createElement(T.Z,{id:"showcase.usersList.allUsers"},"All Posts")),r.createElement("hr",null),r.createElement("ul",{className:Z.Z.showcaseList},I.map((e=>r.createElement(F,{key:e.title,user:e})))))):r.createElement("div",{className:"container"},r.createElement("div",{className:(0,a.Z)("margin-bottom--md",Z.Z.showcaseFavoriteHeader)},r.createElement(j,null)),r.createElement("ul",{className:Z.Z.showcaseList},e.map((e=>r.createElement(F,{key:e.title,user:e}))))))}function J(){return r.createElement(s.Z,{title:"#FallForIA | Community Gallery",description:"A community-contributed showcase gallery"},r.createElement("main",null,r.createElement(W,null),r.createElement(G,null),r.createElement(U,null)))}},90771:(e,t,o)=>{o.d(t,{Z:()=>r});const 
r={heroBanner:"heroBanner_Lyfz",featureImg:"featureImg_Pn4X",features:"features_lsQP",featureSvg:"featureSvg_TGID",filterCheckbox:"filterCheckbox_Zhje",checkboxList:"checkboxList__B7U",showcaseList:"showcaseList_VnWw",checkboxListItem:"checkboxListItem_h7pj",searchContainer:"searchContainer_AsVt",showcaseFavorite:"showcaseFavorite_j9VZ",showcaseHelpWanted:"showcaseHelpWanted_AzKS",helpText:"helpText_Bk3N",showcaseFavoriteHeader:"showcaseFavoriteHeader_orWO",svgIconFavoriteXs:"svgIconFavoriteXs_nM3j",svgIconFavorite:"svgIconFavorite_Ks9A",hide:"hide_Cov8"}}}]); \ No newline at end of file diff --git a/assets/js/dc727da6.4b5830eb.js b/assets/js/dc727da6.8b05f328.js similarity index 50% rename from assets/js/dc727da6.4b5830eb.js rename to assets/js/dc727da6.8b05f328.js index 9c04bf3e9e..262d2df7cb 100644 --- a/assets/js/dc727da6.4b5830eb.js +++ b/assets/js/dc727da6.8b05f328.js @@ -1 +1 @@ -"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[44193],{3905:(e,t,a)=>{a.d(t,{Zo:()=>c,kt:()=>m});var n=a(67294);function i(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function o(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function r(e){for(var t=1;t=0||(i[a]=e[a]);return i}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(i[a]=e[a])}return i}var s=n.createContext({}),p=function(e){var t=n.useContext(s),a=t;return e&&(a="function"==typeof e?e(t):r(r({},t),e)),a},c=function(e){var t=p(e.components);return n.createElement(s.Provider,{value:t},e.children)},d={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},u=n.forwardRef((function(e,t){var a=e.components,i=e.mdxType,o=e.originalType,s=e.parentName,c=l(e,["components","mdxType","originalType","parentName"]),u=p(a),m=i,g=u["".concat(s,".").concat(m)]||u[m]||d[m]||o;return a?n.createElement(g,r(r({ref:t},c),{},{components:a})):n.createElement(g,r({ref:t},c))}));function m(e,t){var a=arguments,i=t&&t.mdxType;if("string"==typeof e||i){var o=a.length,r=new Array(o);r[0]=u;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l.mdxType="string"==typeof e?e:i,r[1]=l;for(var p=2;p{a.r(t),a.d(t,{assets:()=>s,contentTitle:()=>r,default:()=>d,frontMatter:()=>o,metadata:()=>l,toc:()=>p});var n=a(87462),i=(a(67294),a(3905));const o={slug:"road-to-fallforIA",title:"Fall is Coming! \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. 
Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},r=void 0,l={permalink:"/Cloud-Native/30daysofIA/road-to-fallforIA",source:"@site/blog-30daysofIA/2023-08-28/road-to-fallforia.md",title:"Fall is Coming! \ud83c\udf42",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",date:"2023-08-28T00:00:00.000Z",formattedDate:"August 28, 2023",tags:[{label:"Fall-For-IA",permalink:"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{label:"30-days-of-IA",permalink:"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{label:"learn-live",permalink:"/Cloud-Native/30daysofIA/tags/learn-live"},{label:"hack-together",permalink:"/Cloud-Native/30daysofIA/tags/hack-together"},{label:"community-buzz",permalink:"/Cloud-Native/30daysofIA/tags/community-buzz"},{label:"ask-the-expert",permalink:"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{label:"azure-kubernetes-service",permalink:"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{label:"azure-functions",permalink:"/Cloud-Native/30daysofIA/tags/azure-functions"},{label:"azure-openai",permalink:"/Cloud-Native/30daysofIA/tags/azure-openai"},{label:"azure-container-apps",permalink:"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{label:"azure-cosmos-db",permalink:"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{label:"github-copilot",permalink:"/Cloud-Native/30daysofIA/tags/github-copilot"},{label:"github-codespaces",permalink:"/Cloud-Native/30daysofIA/tags/github-codespaces"},{label:"github-actions",permalink:"/Cloud-Native/30daysofIA/tags/github-actions"}],readingTime:.785,hasTruncateMarker:!1,authors:[{name:"It's 30DaysOfIA",title:"FallForIA Content Team",url:"https://github.com/cloud-native",imageURL:"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png",key:"cnteam"}],frontMatter:{slug:"road-to-fallforIA",title:"Fall is Coming! \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. 
Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},prevItem:{title:"HackTogether Recap \ud83c\udf42",permalink:"/Cloud-Native/30daysofIA/hacktogether-recap"}},s={authorsImageUrls:[void 0]},p=[],c={toc:p};function d(e){let{components:t,...a}=e;return(0,i.kt)("wrapper",(0,n.Z)({},c,a,{components:t,mdxType:"MDXLayout"}),(0,i.kt)("head",null,(0,i.kt)("meta",{name:"twitter:url",content:"https://azure.github.io/Cloud-Native/30daysofIA/road-to-fallforIA"}),(0,i.kt)("meta",{name:"twitter:title",content:"It's Time to Fall For Intelligent Apps"}),(0,i.kt)("meta",{name:"twitter:description",content:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure."}),(0,i.kt)("meta",{name:"twitter:image",content:"https://azure.github.io/Cloud-Native/img/ogImage.png"}),(0,i.kt)("meta",{name:"twitter:card",content:"summary_large_image"}),(0,i.kt)("meta",{name:"twitter:creator",content:"@nitya"}),(0,i.kt)("meta",{name:"twitter:site",content:"@AzureAdvocates"}),(0,i.kt)("link",{rel:"canonical",href:"https://azure.github.io/Cloud-Native/30daysofIA/road-to-fallforIA"})),(0,i.kt)("p",null,"September is almost here - and that can only mean one thing!! It's time to ",(0,i.kt)("strong",{parentName:"p"},"\ud83c\udf42 Fall for something new and exciting")," and spend a few weeks skilling up on relevant tools, technologies and solutions!! "),(0,i.kt)("p",null,"Last year, we focused on #ServerlessSeptember. 
This year, we're building on that theme with the addition of cloud-scale ",(0,i.kt)("strong",{parentName:"p"},"Data"),", cloud-native ",(0,i.kt)("strong",{parentName:"p"},"Technologies")," and cloud-based ",(0,i.kt)("strong",{parentName:"p"},"AI")," integrations to help you modernize and build intelligent apps for the enterprise!"),(0,i.kt)("p",null,"Watch this space - and join us in September to learn more!"))}d.isMDXComponent=!0}}]); \ No newline at end of file +"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[44193],{3905:(e,t,a)=>{a.d(t,{Zo:()=>d,kt:()=>m});var n=a(67294);function i(e,t,a){return t in e?Object.defineProperty(e,t,{value:a,enumerable:!0,configurable:!0,writable:!0}):e[t]=a,e}function o(e,t){var a=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),a.push.apply(a,n)}return a}function r(e){for(var t=1;t=0||(i[a]=e[a]);return i}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,a)&&(i[a]=e[a])}return i}var s=n.createContext({}),p=function(e){var t=n.useContext(s),a=t;return e&&(a="function"==typeof e?e(t):r(r({},t),e)),a},d=function(e){var t=p(e.components);return n.createElement(s.Provider,{value:t},e.children)},c={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},u=n.forwardRef((function(e,t){var a=e.components,i=e.mdxType,o=e.originalType,s=e.parentName,d=l(e,["components","mdxType","originalType","parentName"]),u=p(a),m=i,g=u["".concat(s,".").concat(m)]||u[m]||c[m]||o;return a?n.createElement(g,r(r({ref:t},d),{},{components:a})):n.createElement(g,r({ref:t},d))}));function m(e,t){var a=arguments,i=t&&t.mdxType;if("string"==typeof e||i){var o=a.length,r=new Array(o);r[0]=u;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l.mdxType="string"==typeof e?e:i,r[1]=l;for(var p=2;p{a.r(t),a.d(t,{assets:()=>s,contentTitle:()=>r,default:()=>c,frontMatter:()=>o,metadata:()=>l,toc:()=>p});var n=a(87462),i=(a(67294),a(3905));const o={slug:"road-to-fallforIA",title:"Fall is Coming! \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},r=void 0,l={permalink:"/Cloud-Native/30daysofIA/road-to-fallforIA",source:"@site/blog-30daysofIA/2023-08-28/road-to-fallforia.md",title:"Fall is Coming! \ud83c\udf42",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. 
Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",date:"2023-08-28T00:00:00.000Z",formattedDate:"August 28, 2023",tags:[{label:"Fall-For-IA",permalink:"/Cloud-Native/30daysofIA/tags/fall-for-ia"},{label:"30-days-of-IA",permalink:"/Cloud-Native/30daysofIA/tags/30-days-of-ia"},{label:"learn-live",permalink:"/Cloud-Native/30daysofIA/tags/learn-live"},{label:"hack-together",permalink:"/Cloud-Native/30daysofIA/tags/hack-together"},{label:"community-buzz",permalink:"/Cloud-Native/30daysofIA/tags/community-buzz"},{label:"ask-the-expert",permalink:"/Cloud-Native/30daysofIA/tags/ask-the-expert"},{label:"azure-kubernetes-service",permalink:"/Cloud-Native/30daysofIA/tags/azure-kubernetes-service"},{label:"azure-functions",permalink:"/Cloud-Native/30daysofIA/tags/azure-functions"},{label:"azure-openai",permalink:"/Cloud-Native/30daysofIA/tags/azure-openai"},{label:"azure-container-apps",permalink:"/Cloud-Native/30daysofIA/tags/azure-container-apps"},{label:"azure-cosmos-db",permalink:"/Cloud-Native/30daysofIA/tags/azure-cosmos-db"},{label:"github-copilot",permalink:"/Cloud-Native/30daysofIA/tags/github-copilot"},{label:"github-codespaces",permalink:"/Cloud-Native/30daysofIA/tags/github-codespaces"},{label:"github-actions",permalink:"/Cloud-Native/30daysofIA/tags/github-actions"}],readingTime:1.055,hasTruncateMarker:!1,authors:[{name:"It's 30DaysOfIA",title:"FallForIA Content Team",url:"https://github.com/cloud-native",imageURL:"https://azure.github.io/Cloud-Native/img/logo-ms-cloud-native.png",key:"cnteam"}],frontMatter:{slug:"road-to-fallforIA",title:"Fall is Coming! \ud83c\udf42",authors:["cnteam"],draft:!1,hide_table_of_contents:!1,toc_min_heading_level:2,toc_max_heading_level:3,keywords:["Cloud-Scale","Data","AI","AI/ML","intelligent apps","cloud-native","30-days","enterprise apps","digital experiences","app modernization"],image:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png",description:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.",tags:["Fall-For-IA","30-days-of-IA","learn-live","hack-together","community-buzz","ask-the-expert","azure-kubernetes-service","azure-functions","azure-openai","azure-container-apps","azure-cosmos-db","github-copilot","github-codespaces","github-actions"]},prevItem:{title:"HackTogether Recap \ud83c\udf42",permalink:"/Cloud-Native/30daysofIA/hacktogether-recap"}},s={authorsImageUrls:[void 0]},p=[],d={toc:p};function c(e){let{components:t,...a}=e;return(0,i.kt)("wrapper",(0,n.Z)({},d,a,{components:t,mdxType:"MDXLayout"}),(0,i.kt)("head",null,(0,i.kt)("meta",{property:"og:url",content:"https://azure.github.io/cloud-native/30daysofia/road-to-fallforia"}),(0,i.kt)("meta",{property:"og:type",content:"website"}),(0,i.kt)("meta",{property:"og:title",content:"Fall is Coming! \ud83c\udf42 | Build Intelligent Apps On Azure"}),(0,i.kt)("meta",{property:"og:description",content:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. 
Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure."}),(0,i.kt)("meta",{property:"og:image",content:"https://github.com/Azure/Cloud-Native/blob/main/website/static/img/ogImage.png"}),(0,i.kt)("meta",{name:"twitter:url",content:"https://azure.github.io/Cloud-Native/30daysofIA/road-to-fallforIA"}),(0,i.kt)("meta",{name:"twitter:title",content:"It's Time to Fall For Intelligent Apps"}),(0,i.kt)("meta",{name:"twitter:description",content:"Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure."}),(0,i.kt)("meta",{name:"twitter:image",content:"https://azure.github.io/Cloud-Native/img/ogImage.png"}),(0,i.kt)("meta",{name:"twitter:card",content:"summary_large_image"}),(0,i.kt)("meta",{name:"twitter:creator",content:"@nitya"}),(0,i.kt)("meta",{name:"twitter:site",content:"@AzureAdvocates"}),(0,i.kt)("link",{rel:"canonical",href:"https://azure.github.io/Cloud-Native/30daysofIA/road-to-fallforIA"})),(0,i.kt)("p",null,"September is almost here - and that can only mean one thing!! It's time to ",(0,i.kt)("strong",{parentName:"p"},"\ud83c\udf42 Fall for something new and exciting")," and spend a few weeks skilling up on relevant tools, technologies and solutions!! "),(0,i.kt)("p",null,"Last year, we focused on #ServerlessSeptember. This year, we're building on that theme with the addition of cloud-scale ",(0,i.kt)("strong",{parentName:"p"},"Data"),", cloud-native ",(0,i.kt)("strong",{parentName:"p"},"Technologies")," and cloud-based ",(0,i.kt)("strong",{parentName:"p"},"AI")," integrations to help you modernize and build intelligent apps for the enterprise!"),(0,i.kt)("p",null,"Watch this space - and join us in September to learn more!"))}c.isMDXComponent=!0}}]); \ No newline at end of file diff --git a/assets/js/runtime~main.7606d5ec.js b/assets/js/runtime~main.4b0a93ff.js similarity index 99% rename from assets/js/runtime~main.7606d5ec.js rename to assets/js/runtime~main.4b0a93ff.js index 3f7bce7080..206abbd087 100644 --- a/assets/js/runtime~main.7606d5ec.js +++ b/assets/js/runtime~main.4b0a93ff.js @@ -1 +1 @@ -(()=>{"use strict";var e,f,b,a,d,c={},t={};function r(e){var f=t[e];if(void 0!==f)return f.exports;var b=t[e]={id:e,loaded:!1,exports:{}};return c[e].call(b.exports,b,b.exports,r),b.loaded=!0,b.exports}r.m=c,r.c=t,e=[],r.O=(f,b,a,d)=>{if(!b){var c=1/0;for(i=0;i=d)&&Object.keys(r.O).every((e=>r.O[e](b[o])))?b.splice(o--,1):(t=!1,d0&&e[i-1][2]>d;i--)e[i]=e[i-1];e[i]=[b,a,d]},r.n=e=>{var f=e&&e.__esModule?()=>e.default:()=>e;return r.d(f,{a:f}),f},b=Object.getPrototypeOf?e=>Object.getPrototypeOf(e):e=>e.__proto__,r.t=function(e,a){if(1&a&&(e=this(e)),8&a)return e;if("object"==typeof e&&e){if(4&a&&e.__esModule)return e;if(16&a&&"function"==typeof e.then)return e}var d=Object.create(null);r.r(d);var c={};f=f||[null,b({}),b([]),b(b)];for(var t=2&a&&e;"object"==typeof t&&!~f.indexOf(t);t=b(t))Object.getOwnPropertyNames(t).forEach((f=>c[f]=()=>e[f]));return c.default=()=>e,r.d(d,c),d},r.d=(e,f)=>{for(var b in 
f)r.o(f,b)&&!r.o(e,b)&&Object.defineProperty(e,b,{enumerable:!0,get:f[b]})},r.f={},r.e=e=>Promise.all(Object.keys(r.f).reduce(((f,b)=>(r.f[b](e,f),f)),[])),r.u=e=>"assets/js/"+({37:"ff37d1e9",48:"844f46a5",122:"4a8dbbc6",331:"8355b08c",604:"ed2b8e88",628:"19ce5436",638:"c983f72a",684:"79ca3466",733:"4fbefe4a",757:"52285efb",997:"010f538e",1053:"52fc18de",1178:"1bf711d8",1426:"fa0a96ff",1484:"232a5b9a",1738:"0e760f5c",2312:"f6bd7cc2",2453:"a3def401",2803:"7a79be67",2820:"e936f9f6",2856:"b31f0c62",3015:"290bbe6d",3099:"343a65b7",3273:"0a09fc38",3377:"c63bdbb4",3453:"9007293a",3664:"1ef7a213",3857:"c2319041",4237:"8085909f",4270:"ef64c709",4391:"0fc52616",4644:"9ac18e3c",5017:"f83e3e43",5044:"b430dbd6",5078:"ea385099",5172:"dbe9f459",5352:"c2c35f38",5482:"8445e33e",5558:"2cf5c9f6",5621:"6a04bf88",5627:"94fd4cc3",5710:"de95f9b4",6038:"98acceed",6050:"3852251c",6597:"3505d13c",6664:"114a0ea4",6800:"371b5a64",6994:"efe016d3",7010:"2aaf12ef",7477:"3f4f5020",7488:"735396ab",7615:"87bbf9f8",7617:"840d1cce",7623:"b144e829",7698:"7b92706b",7706:"9580127c",7861:"f3333784",7881:"b9e364fe",7945:"e94673f9",8106:"52052568",8109:"4220343e",8217:"f8223cf0",8229:"70037571",8258:"1730ce49",8325:"d49a3a10",8366:"70ec2d67",8410:"695b08bd",8420:"56ed19dd",8442:"b6c1521e",8726:"fcb7c80a",8863:"02ad56ee",9054:"a48a9539",9437:"5e4e61a3",9704:"b45174e4",9817:"14eb3368",9832:"6a6147d5",9885:"6fa36db2",10063:"30f527c7",10297:"13f90819",10322:"f7daf5fe",10387:"8ba97af6",10426:"6352e992",10552:"304f028d",10619:"d9293a3c",10853:"5cf46a9a",11142:"5b188835",11357:"9c3672b5",11575:"ba136adc",11579:"358ca55c",11933:"bbc2b27f",12045:"26a80f01",12046:"fa26972a",12230:"7520942f",12402:"a9ff5d75",12697:"63c787c2",12829:"8c56eedf",12915:"7097285a",13015:"b40db2ab",13085:"1f391b9e",13360:"4a4c152b",13398:"4058c823",13464:"3a7594cb",13497:"3ee42e3e",13545:"c1d8a90e",13596:"079fe0b3",13838:"b9c0af58",14051:"b4ea6d68",14094:"94c8d0a4",14219:"5ae2fd00",14220:"42917112",14611:"cb997589",14631:"1aaa8d08",14899:"3f49754a",14999:"2b94f1a7",15107:"90b498df",15133:"4a93df7c",15326:"e9ebb693",15344:"221d88a7",15399:"5517e946",15494:"14f6b037",15518:"6b9868e6",15602:"83f9535a",15717:"b9d35a0d",15734:"d69f1c18",15762:"a5de73d8",15851:"4f2455b0",15987:"b2c16f4e",16189:"8a2e5722",16355:"4052d3d5",16509:"00429eb7",16590:"8e5814b3",16651:"d22054a7",16663:"ec0ac9db",16781:"069faf48",16857:"3f5ea235",17420:"94ced535",17573:"63b6a597",17696:"2287f69c",17925:"f1fe5cc7",18031:"158ed014",18034:"75d4719a",18159:"311af35b",18390:"e703f3a7",18438:"9e2f083a",18631:"f5784bce",18656:"83bddd4b",18677:"ae3f1154",18870:"0182af35",19056:"4f63e6a8",19093:"5f86de18",19205:"86d446e2",19457:"60f25d92",19570:"6042952c",19608:"e1586f77",19661:"020a0ff4",19699:"9dbc57f1",19977:"171c08f2",19981:"d60c28fa",20006:"934b3b5c",20015:"4e1c0a1c",20116:"808b9749",20243:"dceeb781",20343:"3a0a8c2d",20426:"0154d667",20454:"81ce6d13",20703:"032f8ca1",21383:"2c0c4af3",21401:"9d765af9",21484:"ba7af0ad",21683:"7628c73f",21703:"b66d95b7",21852:"cac8e99d",21897:"ee9bc1a2",21899:"c2d8f9fd",22154:"225d85cd",22184:"1bdf368a",22422:"4edd86bf",22531:"60fcf8e3",22671:"de689bb5",22691:"4f2db759",22723:"69e6ed04",22854:"db1823d5",23064:"d11520dd",23089:"535cf760",23198:"66c5ad31",23372:"69c441bd",23454:"e2262ac9",23849:"543df9b7",23873:"bcbec2d9",24359:"5f63ac35",24766:"e9f92e0d",24913:"4988404d",24966:"08bdb996",25080:"e65795b9",25125:"4edf6cbb",25196:"faa3ccc1",25227:"bbf7817a",25231:"eacffe03",25286:"391aff16",25304:"eef2ff81",25481:"85ebb381",25623:"c327d421",25743:"915f7087",25794:"b34c50e5"
,25847:"b09e51a7",25857:"1ddf7480",26314:"7f177be4",26315:"4c97e608",26453:"082a9ce0",26553:"74bd70f4",26952:"ac6b5ff3",27019:"10362b01",27028:"a285ff6f",27063:"e934991f",27226:"9245a8c6",27242:"beaf9ddb",27244:"0254e92e",27256:"5c062db9",27319:"da0c8116",27837:"b4a8043d",27855:"541e2717",27918:"17896441",28058:"f34f398b",28145:"babe571b",28199:"a6767219",28275:"42fd365f",28369:"d307d5dd",28583:"22711736",28687:"57e8111f",28836:"dd2fa4fa",28843:"cf57d094",28920:"d4bbb0c6",29017:"23b53440",29129:"4a8159d5",29162:"7709179f",29227:"df7b95b8",29257:"d94a5b94",29327:"6e3cf958",29514:"1be78505",29566:"511592be",29604:"6a5b295a",29620:"3e423595",29789:"0fe97ad9",30018:"2efdc0bf",30040:"e1399b64",30214:"12d48ddb",30227:"5a27c07c",30248:"1c910b4c",30299:"15a2eb39",30335:"a648e1ec",30495:"8a1f6266",30611:"21a4b026",30700:"300f5a7a",30724:"2790a299",30743:"5e204f51",30753:"76723d32",31003:"515b0e16",31132:"f128a0f0",31135:"700e33eb",31348:"24634ed0",31609:"97ba9f7a",31757:"310da260",31972:"7d1b9d2c",32197:"a9e85955",32344:"01c7a724",32403:"c39a1b67",32574:"ff198b7d",32627:"a940942f",32708:"ac2c6f29",33133:"2f2b5329",33209:"a361f0db",33275:"485b9e1f",33280:"6cffcc32",33319:"ce53ffd6",33508:"584ccef3",33568:"9147217a",33581:"b53ee4cc",33630:"48ef63ed",33647:"808a5912",33733:"708744f2",33882:"85550d99",33990:"0b5f5bbf",34206:"8bdb3070",34214:"197575b4",34492:"195ec8c1",34594:"d64c3433",34749:"ae8acb83",34934:"6e13655f",34967:"9af977f2",34996:"be7dee77",35002:"4a945222",35090:"1fb2655d",35377:"6c2093fb",35398:"da11032a",35644:"c6c2f8a6",35680:"ffb3fd1a",35727:"4a85be1e",35782:"b589b176",35975:"da6fbf2a",36650:"57fa9de9",36686:"bf4ba93b",36770:"2294c633",36779:"1c7a0340",37050:"08c13187",37279:"f5ac3b90",37382:"e0be8f6f",37460:"8cacefc1",37582:"e6ac9ebe",37651:"37734e29",37728:"54e82dbd",37748:"19dd9a55",37754:"70978acc",37795:"feb943f9",37838:"1280d58f",37923:"28ea4247",37948:"8974c763",37961:"5979b063",37979:"40ec79c5",37984:"714230e1",38057:"d41d8467",38097:"edef2ad1",38130:"0e832ef5",38336:"a6400791",38600:"8c6b1b70",38780:"6a4ca75b",38781:"94607c5f",38794:"e8b803ba",38889:"eb8d02f3",38945:"78f7c451",39010:"f5b7c6a9",39073:"04288e05",39259:"4f0dde4f",39714:"99d1ecb8",39717:"f6df8ec8",39923:"c46736c1",40055:"c8594c9f",40117:"4254c5fd",40127:"0c5ad103",40208:"de48c1c2",40270:"8c7d23e7",40339:"4a100773",40418:"70b87c8a",40626:"f79c2b36",40766:"54e84bd7",40812:"17e657ee",40821:"f8bd7d44",40835:"6abbc264",40858:"ff734053",40869:"92851fbb",40882:"25076710",40976:"b3d197ad",41160:"de4b4bc0",41200:"d0db6cd5",41203:"4b7d35aa",41307:"4e85b922",41331:"d2487d2c",41566:"9f789b70",41571:"4a6890ba",41642:"2508adfd",41735:"563c77e7",41905:"bcf95b3c",42036:"e1bc2a63",42267:"a42917dd",42310:"4e4c4edb",42333:"d216db91",42346:"05309783",42374:"170f5865",42589:"5ebfacad",42613:"3d607786",42888:"f30b4e00",43018:"26fa933c",43049:"ddeb9c3b",43086:"444ef230",43148:"9de9dc34",43333:"82d96bf5",43345:"45b07980",43488:"4f088abf",43583:"fbfa5a90",43602:"470ed423",43871:"e16a4367",44113:"4d232fa6",44116:"ad46602f",44193:"dc727da6",44209:"e88e2b57",44353:"e8c81125",44608:"d0273d46",44702:"f248a380",45121:"ab87274c",45123:"167f29d7",45345:"00e5e0c6",45508:"64930ae0",45711:"940c9439",46103:"ccc49370",46142:"0c8d610a",46154:"6d71a54c",46250:"d584ff55",46305:"e89bd621",46643:"1cc51124",46781:"c32220c7",46822:"605b97c9",46828:"e5becd70",46989:"f5a85496",47145:"dd73d8cf",47246:"8dbb57bc",47335:"b675f7d6",47348:"129735db",47547:"82d2e731",47561:"cb2d3221",47652:"2e6fe460",47839:"be284c34",47857:"9444fc8d",48036:"a95a7c55",48155:"b1a57682",4
8236:"6f14a4c7",48494:"9241169f",48610:"6875c492",48625:"9aaaf4b8",48758:"937ccad8",48850:"a49e650c",48919:"6a312c97",49452:"c80c34af",49484:"1c6266f1",49623:"488446b3",49958:"6ef7e3d4",50070:"65cafd8e",50572:"93b8c5e1",50808:"e41794f0",51115:"5c2ccfbc",51237:"f3cb94e5",51445:"baf5811d",51494:"1224d608",51789:"fc042285",51828:"8baf15a9",52017:"0bcbab68",52071:"f5f8a48c",52238:"a6dcb37f",52338:"d667446e",52434:"b75b9dbd",52535:"814f3328",52569:"a24481e9",52579:"c33a8a7c",52932:"b46d1039",52950:"52c003c1",52989:"898c55cb",53118:"06db7cdd",53174:"3c2b2163",53389:"54a5ea7d",53470:"72e38fbe",53486:"79b2265b",53608:"9e4087bc",53698:"8866a401",53876:"39200a92",54012:"c2839c2d",54018:"d721da33",54202:"79b64cea",54236:"fb88b8ca",54293:"bb29086a",54647:"ff1fa6c9",54813:"e4fc1a09",54894:"6ac0f798",55084:"3b690a08",55191:"05cbd5e2",55224:"c2fb8e8b",55269:"0f519dc1",55411:"893f2e93",55558:"14e99011",55759:"e37c4032",55800:"a65e9479",55912:"05037b3e",56115:"c4dc1033",56414:"e9fc3e68",56462:"717ca7ad",56653:"48efa9f4",56686:"53b3fc79",57040:"5d9699b4",57050:"3402daf1",57077:"cea706bc",57119:"48d83bfd",57168:"41c52eb0",57169:"c44d11af",57550:"cdc79d9c",57832:"b44a2473",57849:"e5ae2d3d",58034:"bb1d8af3",58064:"d9a67898",58079:"488d465e",58280:"66ce2abc",58288:"ac8cc8fe",58428:"b004fb50",58497:"70e93a45",58714:"d475afe6",58738:"9750cd01",59070:"f830ec9e",59477:"c580cfa2",59543:"c44bb002",59593:"1a3abbc3",59978:"2007206c",59988:"d2aa22d4",60169:"30f26a7a",60197:"0abf7f02",60303:"48d7f22e",60606:"426f5ee7",60706:"eb884ce7",60738:"07182537",60840:"5db8c956",60841:"f6a05f02",60899:"41786804",60933:"a9bdffda",60952:"86a7690c",60981:"9fbb892a",61086:"1c8f664c",61154:"39704467",61209:"808beaf0",61520:"a404e9a0",61859:"329ba483",62240:"e1d438e9",62259:"3f534172",62455:"dccd6689",62497:"aaf8be7c",62516:"2891c2a3",62519:"65bd9c5f",62595:"31c97e84",62614:"6e09f910",62972:"1c9ffcde",63013:"35ac9352",63053:"c0df61e5",63167:"db9e00b3",63259:"2d86cfb6",63323:"425319e1",63439:"647961e4",63637:"09a8101c",63642:"5b222fc6",63741:"021c8d1d",63749:"9a3e0d8e",63776:"8c99d685",63981:"86b9f332",63991:"0e1333d1",64013:"01a85c17",64195:"c4f5d8e4",64232:"56ac2859",64243:"d05368c3",64336:"94d4ac07",64388:"e0ee4473",64419:"27dcd181",64675:"e4a2f027",64696:"6e8a7b67",64804:"742b38dc",64835:"6633d22a",65251:"8f4add25",65477:"300fad81",65487:"155d8733",65571:"73f0aa6e",65753:"89439f6f",65893:"a123ff76",65926:"2b022a0d",65992:"36385a98",66019:"0a90bd61",66033:"18b1ff93",66099:"ec244af9",66242:"bc0e8ad0",66444:"aa72c38b",66497:"c0a2372d",66537:"c4b4de0f",66627:"d64808fa",66651:"1fbd1224",66668:"b1db9e78",66672:"8bc7054e",66686:"a00df5b4",66814:"06b5abd5",66975:"afc3e988",67e3:"54e2ce19",67052:"b5dae24c",67197:"02dc33ee",67251:"ffe586b2",67434:"1933092b",67490:"3125c86a",67493:"89197f4f",67562:"36ea8d35",67622:"98735e69",67700:"a8ee6229",67856:"4d42bb9b",68216:"ef8eddd0",68368:"58413115",68468:"e82f66e0",68551:"c2d757e2",68648:"3052e807",68652:"996a3652",68927:"76df9d58",69171:"3e382c14",69189:"561eb05e",69197:"27a255b0",69253:"bb243f37",69407:"fd7a878f",69432:"8b5714b2",69964:"d04223d4",70177:"b425f106",70266:"ab0051c0",70440:"decd1b07",70727:"8178af10",70729:"25508138",70936:"2759b647",70982:"d50e2b40",71012:"d11663f1",71133:"297e3da8",71151:"d272aefc",71223:"597d409d",71303:"5d153b8f",71328:"2fff3a21",71359:"f2d08d34",71468:"2acb43b2",71515:"83f4d82c",71630:"e8dcc3fe",71791:"51698cc9",71898:"74a6c4d8",71926:"c212c0a6",71940:"b1059194",72050:"61d029d7",72235:"6f0c12c9",72601:"bb9438bd",72612:"aa826c81",72971:"16399044",73012:"9a8df0df",7332
3:"2605ac5e",73337:"c52c4229",73344:"25619125",73354:"f848febd",73945:"7e7aedec",74668:"2f5655a7",74710:"e159664d",74834:"cd336e02",74854:"3dd66ec8",74960:"276eee65",74970:"603045b5",75080:"5ac38b2f",75337:"6a5e520d",75555:"02eacc81",75678:"ae14fa1f",75927:"417f410e",76058:"fa6b5e6c",76299:"97c179be",76336:"7ce70624",76434:"19e5cabb",76508:"0f2db0e2",76611:"8136a61a",76785:"7d93b36b",76894:"763e49fc",77022:"bb1bd555",77034:"81eaba73",77046:"d1be9ff4",77099:"246d6ed0",77170:"0955c03d",77182:"5250d15a",77309:"18305907",77364:"edb3edba",77411:"f5ef3ca7",77795:"2e52b9a2",77916:"6d43c7c4",77946:"99a61e74",78059:"2f117675",78061:"d999f503",78132:"9ddf9492",78133:"d00410c7",78192:"61c47875",78484:"ae4a8bfb",78505:"17bd234e",78564:"af753b33",78897:"a2e6ced6",79005:"a68ee39b",79203:"57ada458",79245:"c43f31e5",79367:"a60bbbfe",79476:"b1509bad",79509:"dfd81e36",79522:"0f00d983",79636:"2c768b07",79695:"97a5ae26",79787:"f97394ec",79823:"a9e32c6a",79994:"88d99e0f",80035:"81540514",80053:"935f2afb",80443:"051147c5",80670:"e1daa54d",80819:"0abb84f4",80901:"44a20d39",81155:"45fd4fee",81183:"b04df543",81194:"99a72a3c",81225:"6131b196",81257:"e7c29825",81296:"7ca86db6",81512:"8955acc6",81551:"abf597e2",81583:"eb689fda",81744:"b5a12906",81814:"83f3468a",81944:"f7decf47",82010:"434ff406",82048:"08a845a3",82204:"855c4aff",82369:"c202d824",82457:"ebabe618",82462:"b1cd5b20",82623:"64f93100",82633:"7527a9ef",82651:"3c0e6537",82813:"d2567b4d",82902:"ce9b313c",83061:"e1fc87d9",83228:"fd4ba951",83260:"6b04e7ad",83283:"497459e9",83351:"b478b21b",83357:"b9f7f737",83501:"739bc6b2",83508:"c32a5425",83510:"554c686d",83512:"5cd45a8d",83592:"827b607c",83765:"f4610d17",84038:"48db209f",84275:"51b7d1eb",84359:"b46e7759",84432:"262e1fb1",84567:"66301b34",84643:"08cc3f2a",84654:"9f14d4e5",84752:"1007ba84",85048:"015ef8b2",85493:"4e65812e",85969:"273187e1",85992:"00ff3ab8",86440:"52fb3760",86519:"b57dcd1d",86592:"5b38bd06",86608:"22a8c514",86697:"8025f7fd",86761:"0439459b",86766:"962e1587",86943:"ec65f5d5",87027:"f97a64b3",87097:"11f27dd8",87166:"1e942b07",87413:"e1c2af7b",87585:"b76458e3",87836:"46f628a8",88156:"e3bf2dfe",88439:"0f8260a7",88526:"18754cb8",88564:"77053eb1",88646:"7c5cb72e",88654:"43386584",89534:"7b07dcad",89641:"2464c061",89677:"1a4e3b56",89799:"98a79a26",90210:"45528793",90244:"087fccde",90300:"f6c4aca5",90388:"05b7df8f",90442:"728f6513",90479:"ac0e80dd",90628:"a875518b",90759:"3a894f2b",90861:"0d936d6e",90866:"05a8e5eb",90950:"695a0e95",91007:"f6f0ee1b",91019:"b8de4b14",91034:"67d300de",91117:"a534381b",91214:"9f0c8c51",91259:"c0c2b9da",91367:"a11db7eb",91378:"221f3b9a",91518:"70b24ff0",91584:"631988a9",91589:"7fd555e2",91645:"0d808a5a",91700:"3ca1fc8b",91821:"e097c1da",92097:"382b7bd1",92311:"6383d72d",92340:"f21c8b70",92428:"1567a249",92489:"2828c0bd",92514:"3a3cf5dd",92728:"d7dbf034",92749:"d9d7f0a9",92878:"df8f2207",92922:"26ca5cfc",93061:"f9d7044e",93089:"a6aa9e1f",93176:"ef05350a",93450:"dc5eefd4",93602:"19d4af76",93609:"0fa6c6d6",93683:"52a2e7f4",93744:"4aa36b6e",93804:"bff09194",93954:"af82476a",94106:"137765a7",94125:"fa74e77e",94287:"4f49e52d",94304:"0a1ee2df",94399:"90806480",94419:"12a40cbb",94445:"a4a649e5",94919:"a4a37188",94963:"cafc3c94",95095:"0e5b1676",95175:"f470690a",95259:"4204125f",95701:"13524175",95774:"570b38e4",95903:"67f51f7e",95904:"7cae6c3b",95930:"c5298e55",96010:"fbdbf422",96655:"8d1ef8e7",96792:"f4b1ab07",96907:"e7136c90",96935:"225bf44d",96995:"2b471e02",97021:"4741b16e",97192:"ca71fe7b",97214:"168e5dc9",97491:"66f5903d",97492:"1420d1e4",97515:"ebbe4e7d",97559:"b4e6e6a7",97597:
"244544b0",97613:"37e3b2f7",97655:"532dad37",97665:"31f0dae5",97732:"c736ecf7",97734:"f689083d",98268:"c67b3c2e",98433:"4104106c",98663:"8e84163f",98746:"e0b2cabb",99007:"d9a25476",99285:"765bde49",99434:"01b32472",99450:"dcf58f45",99521:"67f3d899",99632:"ee2b3c0a"}[e]||e)+"."+{37:"5933f91a",48:"74fd891b",122:"563dc2ec",331:"1d47d146",604:"a7c6b3e3",628:"27da0127",638:"6f112f49",684:"80470805",733:"c42582f0",757:"c4acc940",997:"b09031ba",1053:"079ad337",1178:"1b016d75",1426:"6b2f3b4a",1484:"cbbff5b5",1738:"f9b17472",2312:"31e4d51b",2453:"9674ca46",2803:"82219b38",2820:"76c03fca",2856:"1f026311",3015:"18161c54",3099:"f19817e6",3273:"a4fa98ff",3377:"5a639484",3453:"fed16a79",3664:"bc3f9094",3857:"d65decec",4237:"9682c9ac",4270:"8014c846",4391:"3b5c27bf",4644:"17bb9f95",5017:"1e9a05e1",5044:"b157fe8d",5078:"8bf9c9f9",5172:"d5fcbed9",5352:"bf553ddd",5482:"bf182b79",5558:"4b032376",5621:"3fe3f7fb",5627:"43da0b32",5710:"f774a773",6038:"40ced236",6050:"10961830",6597:"4176e0b9",6664:"5a348ad3",6800:"a9884af4",6994:"49e7032e",7010:"d268b173",7477:"25d58eec",7488:"2fc666c4",7615:"2dc8b17a",7617:"f2822fae",7623:"7b70e5fe",7698:"ba66e9f8",7706:"b8513fad",7861:"35d79e8f",7881:"11f110ce",7945:"b8087d3e",8106:"747ef2bc",8109:"4fb6e49a",8217:"da113ae8",8229:"b40f9232",8258:"75104e59",8325:"8a4558b6",8366:"f8337d0c",8410:"88affed4",8420:"7458bc54",8442:"c1f8837b",8726:"9f87d9dc",8863:"b6937ae1",9054:"eb408306",9437:"b9e6beb3",9704:"de317854",9817:"1e44026e",9832:"28df05ef",9885:"2c640d29",10063:"c41f1484",10297:"6c33e176",10322:"dd830233",10387:"cad306f0",10426:"3ac65578",10552:"76185752",10619:"37de607a",10853:"3fcc3904",11142:"7d5fbbcb",11357:"261138c2",11575:"1a3bc56f",11579:"32d9fd30",11933:"655c02c7",12045:"f28881fd",12046:"f41906ee",12230:"a7eadb40",12402:"f87c09e0",12697:"d5002455",12829:"6da4d4f5",12915:"1903402a",13015:"ee5daff5",13085:"b361a78f",13360:"2b1056df",13398:"99ca8781",13464:"bbdbb9fa",13497:"880930ae",13545:"fea2cc4d",13596:"9170d405",13838:"8af72575",14051:"04d20d21",14094:"1d0c4c03",14219:"a6b3e366",14220:"b0b7e58e",14611:"dcf7ea74",14631:"7c3a8c2c",14899:"784f7c36",14999:"773ad495",15107:"8214e6bc",15133:"bc4ae948",15326:"4e0f2084",15344:"e9a9ed50",15399:"3aaab999",15494:"4426ecb7",15518:"bea7049f",15602:"3d06b878",15717:"6cba3a3e",15734:"4ca1345e",15762:"c65d80ed",15851:"1be5c04c",15944:"af64afb6",15987:"c0ece9ca",16189:"ac7d6c24",16355:"0e05495c",16509:"1f72e055",16590:"3f88f18a",16651:"1ad5d2eb",16663:"b2658965",16781:"68c9f9f2",16857:"f3d4e928",17420:"dde40518",17573:"db8b9f9e",17696:"2e7b95ef",17925:"2184e0d7",18031:"f2e14279",18034:"1ad663e1",18159:"45bcd038",18390:"ce32c2df",18438:"e3a6c35d",18631:"bc92bf7b",18656:"d9b033ae",18677:"b72d21a6",18870:"890018a7",19056:"0033a7d3",19093:"8e505683",19205:"f8846446",19457:"4c4f5d30",19570:"564524e7",19608:"fb0aa998",19661:"607974ec",19699:"da89ad50",19977:"024c377a",19981:"5b9f9e44",20006:"c81b309a",20015:"06f7ed6f",20116:"781aa38c",20243:"1eb094da",20343:"7d2c6c8e",20426:"7b583577",20454:"8e697cc3",20703:"e98e7801",21383:"432f7d73",21401:"720a19b6",21484:"b9d3f670",21683:"96074098",21703:"c006b91c",21791:"bf96e0fe",21852:"61ab5ab4",21897:"b6a6cbb5",21899:"2c1bd9b6",22154:"2fe71644",22184:"96da28b4",22422:"057b5be6",22531:"d9ba6b5b",22671:"0be0df4b",22691:"15e7263f",22723:"c55f3e3e",22854:"6a607205",23064:"d539790c",23089:"237f18d0",23198:"a87e262a",23372:"30d227ba",23454:"70985550",23849:"ff2876c8",23873:"4e58d8e7",24359:"8584c686",24766:"468b4cce",24913:"b8cf8f2a",24966:"be84482a",25080:"aee6535e",25125:"db6dc901",25196:"5ea4518b",
25227:"a36efb36",25231:"88ea1c4d",25286:"f44147b8",25304:"d9a8a3c1",25481:"adee698e",25623:"3a22b6ce",25743:"5efeaac5",25794:"802d7efd",25847:"5ce20e70",25857:"96f6f437",26314:"a5ea2e84",26315:"1e3e0f82",26453:"05d9a6f4",26553:"eef5174d",26952:"63951660",27019:"12268c2c",27028:"c071c979",27063:"5182008c",27226:"055d4472",27242:"552d478e",27244:"46bd7ae5",27256:"515df8fe",27319:"0c08dcda",27837:"214e5139",27855:"a2b46343",27918:"4838b5ea",28058:"6cf8c65c",28145:"f3e3531e",28199:"e49a6cd8",28275:"4ba0175f",28369:"9364ef55",28583:"80545a09",28687:"8ae0bba2",28836:"68702e93",28843:"92f1ba22",28920:"a83b3398",29017:"a6e982ba",29129:"59072297",29162:"f790562c",29227:"4d4a2c08",29257:"0fec21f6",29327:"ef0e977c",29514:"1ac7e75c",29566:"46f25376",29604:"7b1e6a41",29620:"6531b96b",29789:"9056a7dc",30018:"61a17b96",30040:"2f49b9ac",30214:"4d2bdddf",30227:"4394b932",30248:"43a53ae7",30299:"59d38389",30335:"ae8edd18",30495:"6246670c",30611:"a094de39",30700:"3eea7ce9",30724:"db9f21d2",30743:"3edeafcd",30753:"64e50ac7",31003:"86b8d54c",31132:"65e4e427",31135:"2722a1b3",31348:"fd44f662",31609:"5418d104",31757:"504ba490",31972:"8fe8f4bc",32197:"62e7dd01",32344:"b311d0ea",32403:"8286490f",32574:"979343f5",32627:"ab3286c2",32708:"f778e5a1",33133:"561fcd99",33209:"d07c11ae",33275:"db7bfb13",33280:"a4797d87",33319:"c5ef9132",33508:"efc56178",33568:"dd1d4440",33581:"59b58625",33630:"d67449a5",33647:"e0283e49",33733:"11b26ab0",33882:"2e8066d9",33990:"1bed7406",34206:"4c244d24",34214:"332631ce",34492:"d15584dd",34594:"6a4bb01c",34749:"bb39a16f",34934:"5d0afc2f",34967:"34cde04f",34996:"efe32c17",35002:"5cc02ead",35090:"a4c4cdce",35377:"ebeb2981",35398:"2241c2d0",35644:"10a0f9e2",35680:"c6814c6d",35727:"50be8dfc",35782:"2841a0f3",35975:"d2118afd",36650:"8b12abae",36686:"a8743407",36770:"d25587ff",36779:"a7ae2649",37050:"11e9a364",37279:"d57a38d6",37382:"97a942cf",37460:"e46c663e",37582:"759cb15e",37651:"455ee130",37728:"ccce251d",37748:"2c2d8707",37754:"213ea3ad",37795:"9ca96123",37838:"3e1a02dc",37923:"72722260",37948:"3e458a41",37961:"36716d44",37979:"01160dff",37984:"e1a0e516",38057:"04e2e265",38097:"7e8ea3d8",38130:"8397b213",38336:"c34346ea",38600:"067113f7",38780:"287fad1e",38781:"5845a6c7",38794:"03fed7c3",38889:"692cbf88",38945:"1eaa87ef",39010:"3b247030",39073:"63e653fc",39259:"c2a9f13e",39714:"fefc3f0b",39717:"c3f8e7e4",39923:"b24d81ed",40055:"624e1196",40117:"60ecff76",40127:"3a604273",40208:"bc9176a9",40270:"02cfbfd4",40339:"4dc46223",40418:"a4e15d62",40626:"d6b49d3c",40766:"2c718ec5",40812:"9ff48a8f",40821:"8d018d8e",40835:"a3b514a5",40858:"41d37882",40869:"2dfaee7d",40882:"e79cc157",40976:"c6560ef2",41160:"a2f4a227",41200:"d391fd86",41203:"5506e1fd",41307:"c7af2fcf",41331:"3563c427",41566:"2ecc0770",41571:"248dc9b1",41642:"cc8c7920",41735:"04a1a727",41905:"3ab27b6b",42036:"e740be66",42267:"6b510e96",42310:"aff971a4",42333:"05386e40",42346:"d30b6991",42374:"b3727ab2",42589:"72f42903",42613:"26b510bd",42888:"59c80b99",43018:"8e4c6e7b",43049:"17af2537",43086:"271d0678",43148:"d370b211",43333:"aaa32d4a",43345:"57c984eb",43488:"862a37ab",43583:"87c50916",43602:"188d7735",43871:"47ce1193",44113:"e19d688d",44116:"2b20ffdd",44193:"4b5830eb",44209:"bba20461",44353:"32787e96",44608:"a2e4652a",44702:"5d55e231",45121:"cd5290d0",45123:"b90f6c07",45345:"d77dfa36",45508:"cca7883d",45711:"5f495d56",46103:"17b266de",46142:"7605e6ad",46154:"045dfcec",46250:"62c50002",46305:"27c4ebb7",46643:"d125c5f7",46781:"c8d25f6f",46822:"ea1b059a",46828:"d5010265",46989:"d5af6b90",47145:"7b8f79b0",47246:"9a4a46dc",47335:"d3fac4a3",47
348:"992960d4",47547:"f3e85f31",47561:"b6d9379b",47652:"97981a37",47839:"f1ec02b6",47857:"46e8a67b",48036:"2026159d",48155:"1c16a9a6",48236:"3d17c421",48494:"b476b64b",48610:"038d0474",48625:"8eaa651d",48758:"082a2e65",48850:"599cc2d0",48919:"ab0f5849",49452:"aafb838d",49484:"5adf4bb3",49623:"61dfd9b5",49958:"5b2459e8",50070:"1e384e32",50572:"3cec2473",50808:"f9279d46",51115:"88b465c5",51237:"96ef6750",51445:"6b02ac22",51494:"27f26066",51789:"c65b9779",51828:"c3a12705",52017:"433cdd7c",52071:"70a282f1",52238:"4b3d7d33",52338:"c182542c",52434:"4db50a8a",52535:"5ea52c33",52569:"50b50d05",52579:"7c69d60a",52932:"7b6e43d4",52950:"6d651ca0",52989:"0b6167e8",53118:"13c65b53",53174:"1b0c817f",53389:"9cfe49ec",53470:"5e7b1c85",53486:"a40d83e8",53608:"2cff198f",53698:"c1bc4aab",53876:"34ab4e07",54012:"e4a6012d",54018:"77587fb0",54202:"3a68b9a9",54236:"dc86db17",54293:"ed0be7d0",54647:"207b0a07",54813:"a67be84b",54894:"b3d78903",55084:"ea337b16",55191:"50f44c34",55224:"cd3cac2b",55269:"fba5356f",55411:"42fcfab8",55558:"5f8032b7",55759:"c946df51",55800:"2033ed07",55912:"f441e25d",56115:"337617fe",56414:"a3e37706",56462:"bf860770",56653:"b8f2f574",56686:"7da2837f",57040:"2bd92a3d",57050:"dbc70cd1",57077:"6bd7e954",57119:"273b819a",57168:"735db641",57169:"4dba0047",57550:"db779aff",57832:"07bdd689",57849:"266019e9",58034:"32ee197d",58064:"8c8e5d8b",58079:"ed8b7022",58280:"58478506",58288:"f8530ad3",58428:"892b90c9",58497:"4e3e1ede",58714:"3d3640e8",58738:"7352ecf0",59070:"aa69fedd",59477:"81511f98",59543:"e9802148",59593:"f6f0974b",59978:"c5e114b4",59988:"4251282d",60169:"bdf2d842",60197:"b0525b92",60303:"0e49fdc2",60606:"56078b2a",60706:"6df08112",60738:"fe4f3e69",60840:"5c3c223b",60841:"ce0a2cc4",60899:"4aad0b0a",60933:"e8cc360d",60952:"44f86efc",60981:"9fdfba11",61086:"96544307",61154:"cbef6e0a",61209:"6cc41d6c",61520:"bb03362b",61859:"578be531",62240:"4706a2ff",62259:"9abbba15",62455:"a1e559c8",62497:"59c4fd5f",62516:"15c60f6b",62519:"947b35d9",62595:"94aefa50",62614:"05e19552",62972:"6835bc16",63013:"b2d2a01c",63053:"4700cdff",63167:"abda22b7",63259:"4a673c5e",63323:"39cab38c",63439:"1d1a43fa",63637:"19f260ae",63642:"5f1ae0dd",63741:"71eb1bcd",63749:"c19603e3",63776:"f9f58eb6",63981:"02286d16",63991:"8a54b88b",64013:"03a563a6",64195:"10856761",64232:"8a33c233",64243:"75989d92",64336:"b16dfcf9",64388:"12b1ce59",64419:"595bb63e",64675:"792cf31c",64696:"8063b40d",64804:"4e092706",64835:"e93b78c1",65251:"368ed371",65477:"f8693855",65487:"0f938f35",65571:"a88a4f02",65753:"207a449d",65893:"65d2aa95",65926:"f0c4933f",65992:"8155c7d1",66019:"f5ee5607",66033:"c702b287",66099:"0bdcede5",66242:"f6cd64b3",66444:"d90e13d8",66497:"42dab590",66537:"3e5d621b",66627:"535a46a9",66651:"cf30a8b5",66668:"f6c24571",66672:"f7ea5c91",66686:"9a002a38",66814:"7c54daca",66975:"26a03d4e",67e3:"7a2f8454",67052:"ec301934",67197:"ded5091d",67251:"e7a20109",67434:"93fb2a4d",67490:"41c6c7b5",67493:"fdb654f7",67562:"59cf737d",67622:"475d26f0",67700:"a1bb520e",67856:"4d6434fd",68216:"11dd4212",68368:"aa3680b0",68468:"a14d04c5",68551:"710d1a33",68648:"4c270532",68652:"e8c65b5c",68927:"383338f9",69171:"783e4228",69189:"662fb40b",69197:"647a8ea2",69253:"af3363a4",69407:"f48995b1",69432:"71709137",69964:"0f862fcd",70177:"83306a3d",70266:"4a1b2650",70440:"0b3cc72a",70727:"c0778700",70729:"77c5f782",70936:"2c3852c1",70982:"6c87873b",71012:"51dae131",71133:"b0814e2e",71151:"45dc9fc4",71223:"2e67de49",71303:"6e97b863",71328:"e98b1854",71359:"89fbf093",71468:"9b44129b",71515:"5411c5b7",71630:"e8cc78e1",71791:"e3dd836c",71898:"389786ed",71926
:"2e9fc2ee",71940:"df64700a",72050:"c4e794da",72235:"82db2ab9",72601:"7e838f42",72612:"994ef572",72971:"46cb4a7c",73012:"45a8932f",73323:"7e89d196",73337:"6ba05268",73344:"2562505d",73354:"1bc3c6e5",73945:"db10a21a",74248:"317299c5",74668:"cdc50bef",74710:"75bf4517",74834:"e5197f2d",74854:"a8e121b7",74960:"945d52ca",74970:"6dfb1c76",75080:"151b4ca6",75287:"648c9fec",75337:"d6d1fbd3",75555:"b5f54726",75678:"c37afb6b",75927:"be7f1598",76058:"a08263a0",76299:"6c1850be",76336:"31000fc2",76434:"3ae4b594",76508:"0cdcd502",76611:"4f449a9f",76785:"c58c4056",76894:"7e01133a",77022:"585cd7a3",77034:"422ca81d",77046:"ae095a34",77099:"b5df1df3",77170:"55466a2c",77182:"4a946cf4",77309:"5c4e6bd7",77364:"71272021",77411:"40c39ca2",77795:"20664d89",77916:"eb7548b2",77946:"50b604a6",78059:"3323f5f4",78061:"44508634",78132:"4b4ab92b",78133:"2ef6dc97",78192:"ae1c3fad",78484:"e85424a8",78505:"d445ca81",78564:"de056442",78897:"b71d3b4d",79005:"dd91fbad",79203:"d16bdd25",79245:"6ed6d002",79367:"aa518dfe",79476:"931fa5f1",79509:"277c3204",79522:"67e487c9",79636:"ef79b69c",79695:"4ab823e5",79787:"2ec87ae6",79823:"0c88d933",79994:"2356b20f",80035:"3b00b5a6",80053:"65a2cbf9",80443:"8f901e8b",80670:"03d4e9dd",80819:"3468159a",80901:"63df8fcb",81155:"e7b29595",81183:"20dee322",81194:"19b943c6",81225:"d1f5aabc",81257:"33523c17",81296:"eca59293",81512:"7b9e9d63",81551:"91f7f90f",81583:"718ddb87",81744:"a90cd918",81814:"069107bb",81944:"227c3bb5",82010:"7000a07f",82048:"2d38b3a9",82204:"c745fcf8",82369:"bb0c4fb2",82457:"c412245c",82462:"70e8235a",82623:"34994308",82633:"2c5462db",82651:"050a38f5",82813:"fb93488f",82902:"d2330565",83061:"276981f7",83228:"e1598dc3",83260:"9a2d7ed1",83283:"040206c6",83351:"489cb4ae",83357:"83371262",83501:"fee7d68f",83508:"1ae987e2",83510:"8c65d881",83512:"cc5e4449",83592:"7797585d",83765:"c0555ed1",84038:"e928c030",84275:"6ce8164d",84359:"5f80f6ab",84432:"8d57e9f8",84567:"228433c9",84643:"086988aa",84654:"6096f393",84752:"74260af4",85048:"c332faad",85493:"7301f55c",85969:"c44bb119",85992:"3bd7cb73",86440:"763d93fb",86519:"8cc9ab92",86592:"df18f459",86608:"ad599d7a",86697:"071b687c",86761:"ca1acb07",86766:"3b471447",86943:"42d883c0",87027:"39650adc",87097:"98219dbd",87166:"42af16fe",87413:"352bf907",87585:"633732ee",87836:"e1650ecd",88156:"1426d205",88439:"9269f18e",88526:"6c026c2f",88564:"9c758bb6",88646:"c1aeaa84",88654:"4d8d2cd9",89534:"2a3a831f",89641:"6052dea2",89677:"707e39e6",89799:"0d3aa064",90210:"0b56f251",90244:"2d402ca5",90300:"2f314275",90388:"fad898d4",90442:"6d84fff1",90479:"532d5e7a",90628:"c22cde5f",90759:"6c300aea",90861:"08620d12",90866:"7e7d7458",90950:"cccdb672",91007:"6dc74c15",91019:"9b09c5d1",91034:"20fd4e24",91117:"4ba89e87",91214:"ce6a2748",91259:"ac23e049",91367:"7e8d93a6",91378:"b2e765ef",91518:"48b2838a",91584:"d1090d09",91589:"9b3441f0",91645:"2d64e871",91700:"ada5bd8d",91821:"ce55f017",92097:"ed5c446a",92311:"c2c068c3",92340:"8143b691",92428:"6f7a5c51",92489:"dd0cbfc9",92514:"c510e5e3",92728:"2f172920",92749:"a8da147e",92878:"67576f54",92922:"98815355",93061:"25603cec",93089:"e66d7d82",93176:"b76b5762",93450:"d80fc25b",93602:"6e42a41c",93609:"044f94a5",93683:"79c29c31",93744:"66e8e948",93804:"93958d88",93954:"c0420a62",94106:"dd96c34a",94125:"a2f571e3",94287:"21549ec9",94304:"a62a2020",94399:"4a575fbe",94419:"6c86618e",94445:"2d14c86b",94598:"cbf8f4af",94919:"7801b297",94963:"12a978c5",95095:"3b2912a9",95175:"29b42909",95259:"ad1997b9",95701:"333dde60",95774:"c2ceb80c",95903:"713409ed",95904:"c8f5c4c3",95930:"e956c391",96010:"49da5ac1",96655:"beb9aec9",96792:"
2fe4fb76",96907:"24e25ee0",96935:"5ab4fc91",96995:"5a858146",97021:"1c267736",97192:"5b53c0f9",97214:"48a9d551",97491:"fa98d8ed",97492:"d37a6c7f",97515:"4152d2c6",97559:"349a5f60",97597:"6c665d18",97613:"7205cf58",97655:"df49261f",97665:"afe54de4",97732:"98e5f02e",97734:"bffeb70d",98268:"41c57aa4",98433:"2f92f49f",98663:"914b8a3d",98746:"6efc8a06",99007:"fffcfa67",99285:"adcd2037",99434:"a6352696",99450:"1325fa2f",99521:"939c4172",99632:"fc20a8b1"}[e]+".js",r.miniCssF=e=>{},r.g=function(){if("object"==typeof globalThis)return globalThis;try{return this||new Function("return this")()}catch(e){if("object"==typeof window)return window}}(),r.o=(e,f)=>Object.prototype.hasOwnProperty.call(e,f),a={},d="website:",r.l=(e,f,b,c)=>{if(a[e])a[e].push(f);else{var t,o;if(void 0!==b)for(var n=document.getElementsByTagName("script"),i=0;i{t.onerror=t.onload=null,clearTimeout(s);var d=a[e];if(delete a[e],t.parentNode&&t.parentNode.removeChild(t),d&&d.forEach((e=>e(b))),f)return f(b)},s=setTimeout(l.bind(null,void 0,{type:"timeout",target:t}),12e4);t.onerror=l.bind(null,t.onerror),t.onload=l.bind(null,t.onload),o&&document.head.appendChild(t)}},r.r=e=>{"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},r.p="/Cloud-Native/",r.gca=function(e){return e={13524175:"95701",16399044:"72971",17896441:"27918",18305907:"77309",22711736:"28583",25076710:"40882",25508138:"70729",25619125:"73344",39704467:"61154",41786804:"60899",42917112:"14220",43386584:"88654",45528793:"90210",52052568:"8106",58413115:"68368",70037571:"8229",81540514:"80035",90806480:"94399",ff37d1e9:"37","844f46a5":"48","4a8dbbc6":"122","8355b08c":"331",ed2b8e88:"604","19ce5436":"628",c983f72a:"638","79ca3466":"684","4fbefe4a":"733","52285efb":"757","010f538e":"997","52fc18de":"1053","1bf711d8":"1178",fa0a96ff:"1426","232a5b9a":"1484","0e760f5c":"1738",f6bd7cc2:"2312",a3def401:"2453","7a79be67":"2803",e936f9f6:"2820",b31f0c62:"2856","290bbe6d":"3015","343a65b7":"3099","0a09fc38":"3273",c63bdbb4:"3377","9007293a":"3453","1ef7a213":"3664",c2319041:"3857","8085909f":"4237",ef64c709:"4270","0fc52616":"4391","9ac18e3c":"4644",f83e3e43:"5017",b430dbd6:"5044",ea385099:"5078",dbe9f459:"5172",c2c35f38:"5352","8445e33e":"5482","2cf5c9f6":"5558","6a04bf88":"5621","94fd4cc3":"5627",de95f9b4:"5710","98acceed":"6038","3852251c":"6050","3505d13c":"6597","114a0ea4":"6664","371b5a64":"6800",efe016d3:"6994","2aaf12ef":"7010","3f4f5020":"7477","735396ab":"7488","87bbf9f8":"7615","840d1cce":"7617",b144e829:"7623","7b92706b":"7698","9580127c":"7706",f3333784:"7861",b9e364fe:"7881",e94673f9:"7945","4220343e":"8109",f8223cf0:"8217","1730ce49":"8258",d49a3a10:"8325","70ec2d67":"8366","695b08bd":"8410","56ed19dd":"8420",b6c1521e:"8442",fcb7c80a:"8726","02ad56ee":"8863",a48a9539:"9054","5e4e61a3":"9437",b45174e4:"9704","14eb3368":"9817","6a6147d5":"9832","6fa36db2":"9885","30f527c7":"10063","13f90819":"10297",f7daf5fe:"10322","8ba97af6":"10387","6352e992":"10426","304f028d":"10552",d9293a3c:"10619","5cf46a9a":"10853","5b188835":"11142","9c3672b5":"11357",ba136adc:"11575","358ca55c":"11579",bbc2b27f:"11933","26a80f01":"12045",fa26972a:"12046","7520942f":"12230",a9ff5d75:"12402","63c787c2":"12697","8c56eedf":"12829","7097285a":"12915",b40db2ab:"13015","1f391b9e":"13085","4a4c152b":"13360","4058c823":"13398","3a7594cb":"13464","3ee42e3e":"13497",c1d8a90e:"13545","079fe0b3":"13596",b9c0af58:"13838",b4ea6d68:"14051","94c8d0a4":"14094","5ae2fd00":"14219",cb997589:"14611","1aa
a8d08":"14631","3f49754a":"14899","2b94f1a7":"14999","90b498df":"15107","4a93df7c":"15133",e9ebb693:"15326","221d88a7":"15344","5517e946":"15399","14f6b037":"15494","6b9868e6":"15518","83f9535a":"15602",b9d35a0d:"15717",d69f1c18:"15734",a5de73d8:"15762","4f2455b0":"15851",b2c16f4e:"15987","8a2e5722":"16189","4052d3d5":"16355","00429eb7":"16509","8e5814b3":"16590",d22054a7:"16651",ec0ac9db:"16663","069faf48":"16781","3f5ea235":"16857","94ced535":"17420","63b6a597":"17573","2287f69c":"17696",f1fe5cc7:"17925","158ed014":"18031","75d4719a":"18034","311af35b":"18159",e703f3a7:"18390","9e2f083a":"18438",f5784bce:"18631","83bddd4b":"18656",ae3f1154:"18677","0182af35":"18870","4f63e6a8":"19056","5f86de18":"19093","86d446e2":"19205","60f25d92":"19457","6042952c":"19570",e1586f77:"19608","020a0ff4":"19661","9dbc57f1":"19699","171c08f2":"19977",d60c28fa:"19981","934b3b5c":"20006","4e1c0a1c":"20015","808b9749":"20116",dceeb781:"20243","3a0a8c2d":"20343","0154d667":"20426","81ce6d13":"20454","032f8ca1":"20703","2c0c4af3":"21383","9d765af9":"21401",ba7af0ad:"21484","7628c73f":"21683",b66d95b7:"21703",cac8e99d:"21852",ee9bc1a2:"21897",c2d8f9fd:"21899","225d85cd":"22154","1bdf368a":"22184","4edd86bf":"22422","60fcf8e3":"22531",de689bb5:"22671","4f2db759":"22691","69e6ed04":"22723",db1823d5:"22854",d11520dd:"23064","535cf760":"23089","66c5ad31":"23198","69c441bd":"23372",e2262ac9:"23454","543df9b7":"23849",bcbec2d9:"23873","5f63ac35":"24359",e9f92e0d:"24766","4988404d":"24913","08bdb996":"24966",e65795b9:"25080","4edf6cbb":"25125",faa3ccc1:"25196",bbf7817a:"25227",eacffe03:"25231","391aff16":"25286",eef2ff81:"25304","85ebb381":"25481",c327d421:"25623","915f7087":"25743",b34c50e5:"25794",b09e51a7:"25847","1ddf7480":"25857","7f177be4":"26314","4c97e608":"26315","082a9ce0":"26453","74bd70f4":"26553",ac6b5ff3:"26952","10362b01":"27019",a285ff6f:"27028",e934991f:"27063","9245a8c6":"27226",beaf9ddb:"27242","0254e92e":"27244","5c062db9":"27256",da0c8116:"27319",b4a8043d:"27837","541e2717":"27855",f34f398b:"28058",babe571b:"28145",a6767219:"28199","42fd365f":"28275",d307d5dd:"28369","57e8111f":"28687",dd2fa4fa:"28836",cf57d094:"28843",d4bbb0c6:"28920","23b53440":"29017","4a8159d5":"29129","7709179f":"29162",df7b95b8:"29227",d94a5b94:"29257","6e3cf958":"29327","1be78505":"29514","511592be":"29566","6a5b295a":"29604","3e423595":"29620","0fe97ad9":"29789","2efdc0bf":"30018",e1399b64:"30040","12d48ddb":"30214","5a27c07c":"30227","1c910b4c":"30248","15a2eb39":"30299",a648e1ec:"30335","8a1f6266":"30495","21a4b026":"30611","300f5a7a":"30700","2790a299":"30724","5e204f51":"30743","76723d32":"30753","515b0e16":"31003",f128a0f0:"31132","700e33eb":"31135","24634ed0":"31348","97ba9f7a":"31609","310da260":"31757","7d1b9d2c":"31972",a9e85955:"32197","01c7a724":"32344",c39a1b67:"32403",ff198b7d:"32574",a940942f:"32627",ac2c6f29:"32708","2f2b5329":"33133",a361f0db:"33209","485b9e1f":"33275","6cffcc32":"33280",ce53ffd6:"33319","584ccef3":"33508","9147217a":"33568",b53ee4cc:"33581","48ef63ed":"33630","808a5912":"33647","708744f2":"33733","85550d99":"33882","0b5f5bbf":"33990","8bdb3070":"34206","197575b4":"34214","195ec8c1":"34492",d64c3433:"34594",ae8acb83:"34749","6e13655f":"34934","9af977f2":"34967",be7dee77:"34996","4a945222":"35002","1fb2655d":"35090","6c2093fb":"35377",da11032a:"35398",c6c2f8a6:"35644",ffb3fd1a:"35680","4a85be1e":"35727",b589b176:"35782",da6fbf2a:"35975","57fa9de9":"36650",bf4ba93b:"36686","2294c633":"36770","1c7a0340":"36779","08c13187":"37050",f5ac3b90:"37279",e0be8f6f:"37382","8cacefc1":"37460",e6ac9ebe:"375
82","37734e29":"37651","54e82dbd":"37728","19dd9a55":"37748","70978acc":"37754",feb943f9:"37795","1280d58f":"37838","28ea4247":"37923","8974c763":"37948","5979b063":"37961","40ec79c5":"37979","714230e1":"37984",d41d8467:"38057",edef2ad1:"38097","0e832ef5":"38130",a6400791:"38336","8c6b1b70":"38600","6a4ca75b":"38780","94607c5f":"38781",e8b803ba:"38794",eb8d02f3:"38889","78f7c451":"38945",f5b7c6a9:"39010","04288e05":"39073","4f0dde4f":"39259","99d1ecb8":"39714",f6df8ec8:"39717",c46736c1:"39923",c8594c9f:"40055","4254c5fd":"40117","0c5ad103":"40127",de48c1c2:"40208","8c7d23e7":"40270","4a100773":"40339","70b87c8a":"40418",f79c2b36:"40626","54e84bd7":"40766","17e657ee":"40812",f8bd7d44:"40821","6abbc264":"40835",ff734053:"40858","92851fbb":"40869",b3d197ad:"40976",de4b4bc0:"41160",d0db6cd5:"41200","4b7d35aa":"41203","4e85b922":"41307",d2487d2c:"41331","9f789b70":"41566","4a6890ba":"41571","2508adfd":"41642","563c77e7":"41735",bcf95b3c:"41905",e1bc2a63:"42036",a42917dd:"42267","4e4c4edb":"42310",d216db91:"42333","05309783":"42346","170f5865":"42374","5ebfacad":"42589","3d607786":"42613",f30b4e00:"42888","26fa933c":"43018",ddeb9c3b:"43049","444ef230":"43086","9de9dc34":"43148","82d96bf5":"43333","45b07980":"43345","4f088abf":"43488",fbfa5a90:"43583","470ed423":"43602",e16a4367:"43871","4d232fa6":"44113",ad46602f:"44116",dc727da6:"44193",e88e2b57:"44209",e8c81125:"44353",d0273d46:"44608",f248a380:"44702",ab87274c:"45121","167f29d7":"45123","00e5e0c6":"45345","64930ae0":"45508","940c9439":"45711",ccc49370:"46103","0c8d610a":"46142","6d71a54c":"46154",d584ff55:"46250",e89bd621:"46305","1cc51124":"46643",c32220c7:"46781","605b97c9":"46822",e5becd70:"46828",f5a85496:"46989",dd73d8cf:"47145","8dbb57bc":"47246",b675f7d6:"47335","129735db":"47348","82d2e731":"47547",cb2d3221:"47561","2e6fe460":"47652",be284c34:"47839","9444fc8d":"47857",a95a7c55:"48036",b1a57682:"48155","6f14a4c7":"48236","9241169f":"48494","6875c492":"48610","9aaaf4b8":"48625","937ccad8":"48758",a49e650c:"48850","6a312c97":"48919",c80c34af:"49452","1c6266f1":"49484","488446b3":"49623","6ef7e3d4":"49958","65cafd8e":"50070","93b8c5e1":"50572",e41794f0:"50808","5c2ccfbc":"51115",f3cb94e5:"51237",baf5811d:"51445","1224d608":"51494",fc042285:"51789","8baf15a9":"51828","0bcbab68":"52017",f5f8a48c:"52071",a6dcb37f:"52238",d667446e:"52338",b75b9dbd:"52434","814f3328":"52535",a24481e9:"52569",c33a8a7c:"52579",b46d1039:"52932","52c003c1":"52950","898c55cb":"52989","06db7cdd":"53118","3c2b2163":"53174","54a5ea7d":"53389","72e38fbe":"53470","79b2265b":"53486","9e4087bc":"53608","8866a401":"53698","39200a92":"53876",c2839c2d:"54012",d721da33:"54018","79b64cea":"54202",fb88b8ca:"54236",bb29086a:"54293",ff1fa6c9:"54647",e4fc1a09:"54813","6ac0f798":"54894","3b690a08":"55084","05cbd5e2":"55191",c2fb8e8b:"55224","0f519dc1":"55269","893f2e93":"55411","14e99011":"55558",e37c4032:"55759",a65e9479:"55800","05037b3e":"55912",c4dc1033:"56115",e9fc3e68:"56414","717ca7ad":"56462","48efa9f4":"56653","53b3fc79":"56686","5d9699b4":"57040","3402daf1":"57050",cea706bc:"57077","48d83bfd":"57119","41c52eb0":"57168",c44d11af:"57169",cdc79d9c:"57550",b44a2473:"57832",e5ae2d3d:"57849",bb1d8af3:"58034",d9a67898:"58064","488d465e":"58079","66ce2abc":"58280",ac8cc8fe:"58288",b004fb50:"58428","70e93a45":"58497",d475afe6:"58714","9750cd01":"58738",f830ec9e:"59070",c580cfa2:"59477",c44bb002:"59543","1a3abbc3":"59593","2007206c":"59978",d2aa22d4:"59988","30f26a7a":"60169","0abf7f02":"60197","48d7f22e":"60303","426f5ee7":"60606",eb884ce7:"60706","07182537":"60738","5db8c956":"60
840",f6a05f02:"60841",a9bdffda:"60933","86a7690c":"60952","9fbb892a":"60981","1c8f664c":"61086","808beaf0":"61209",a404e9a0:"61520","329ba483":"61859",e1d438e9:"62240","3f534172":"62259",dccd6689:"62455",aaf8be7c:"62497","2891c2a3":"62516","65bd9c5f":"62519","31c97e84":"62595","6e09f910":"62614","1c9ffcde":"62972","35ac9352":"63013",c0df61e5:"63053",db9e00b3:"63167","2d86cfb6":"63259","425319e1":"63323","647961e4":"63439","09a8101c":"63637","5b222fc6":"63642","021c8d1d":"63741","9a3e0d8e":"63749","8c99d685":"63776","86b9f332":"63981","0e1333d1":"63991","01a85c17":"64013",c4f5d8e4:"64195","56ac2859":"64232",d05368c3:"64243","94d4ac07":"64336",e0ee4473:"64388","27dcd181":"64419",e4a2f027:"64675","6e8a7b67":"64696","742b38dc":"64804","6633d22a":"64835","8f4add25":"65251","300fad81":"65477","155d8733":"65487","73f0aa6e":"65571","89439f6f":"65753",a123ff76:"65893","2b022a0d":"65926","36385a98":"65992","0a90bd61":"66019","18b1ff93":"66033",ec244af9:"66099",bc0e8ad0:"66242",aa72c38b:"66444",c0a2372d:"66497",c4b4de0f:"66537",d64808fa:"66627","1fbd1224":"66651",b1db9e78:"66668","8bc7054e":"66672",a00df5b4:"66686","06b5abd5":"66814",afc3e988:"66975","54e2ce19":"67000",b5dae24c:"67052","02dc33ee":"67197",ffe586b2:"67251","1933092b":"67434","3125c86a":"67490","89197f4f":"67493","36ea8d35":"67562","98735e69":"67622",a8ee6229:"67700","4d42bb9b":"67856",ef8eddd0:"68216",e82f66e0:"68468",c2d757e2:"68551","3052e807":"68648","996a3652":"68652","76df9d58":"68927","3e382c14":"69171","561eb05e":"69189","27a255b0":"69197",bb243f37:"69253",fd7a878f:"69407","8b5714b2":"69432",d04223d4:"69964",b425f106:"70177",ab0051c0:"70266",decd1b07:"70440","8178af10":"70727","2759b647":"70936",d50e2b40:"70982",d11663f1:"71012","297e3da8":"71133",d272aefc:"71151","597d409d":"71223","5d153b8f":"71303","2fff3a21":"71328",f2d08d34:"71359","2acb43b2":"71468","83f4d82c":"71515",e8dcc3fe:"71630","51698cc9":"71791","74a6c4d8":"71898",c212c0a6:"71926",b1059194:"71940","61d029d7":"72050","6f0c12c9":"72235",bb9438bd:"72601",aa826c81:"72612","9a8df0df":"73012","2605ac5e":"73323",c52c4229:"73337",f848febd:"73354","7e7aedec":"73945","2f5655a7":"74668",e159664d:"74710",cd336e02:"74834","3dd66ec8":"74854","276eee65":"74960","603045b5":"74970","5ac38b2f":"75080","6a5e520d":"75337","02eacc81":"75555",ae14fa1f:"75678","417f410e":"75927",fa6b5e6c:"76058","97c179be":"76299","7ce70624":"76336","19e5cabb":"76434","0f2db0e2":"76508","8136a61a":"76611","7d93b36b":"76785","763e49fc":"76894",bb1bd555:"77022","81eaba73":"77034",d1be9ff4:"77046","246d6ed0":"77099","0955c03d":"77170","5250d15a":"77182",edb3edba:"77364",f5ef3ca7:"77411","2e52b9a2":"77795","6d43c7c4":"77916","99a61e74":"77946","2f117675":"78059",d999f503:"78061","9ddf9492":"78132",d00410c7:"78133","61c47875":"78192",ae4a8bfb:"78484","17bd234e":"78505",af753b33:"78564",a2e6ced6:"78897",a68ee39b:"79005","57ada458":"79203",c43f31e5:"79245",a60bbbfe:"79367",b1509bad:"79476",dfd81e36:"79509","0f00d983":"79522","2c768b07":"79636","97a5ae26":"79695",f97394ec:"79787",a9e32c6a:"79823","88d99e0f":"79994","935f2afb":"80053","051147c5":"80443",e1daa54d:"80670","0abb84f4":"80819","44a20d39":"80901","45fd4fee":"81155",b04df543:"81183","99a72a3c":"81194","6131b196":"81225",e7c29825:"81257","7ca86db6":"81296","8955acc6":"81512",abf597e2:"81551",eb689fda:"81583",b5a12906:"81744","83f3468a":"81814",f7decf47:"81944","434ff406":"82010","08a845a3":"82048","855c4aff":"82204",c202d824:"82369",ebabe618:"82457",b1cd5b20:"82462","64f93100":"82623","7527a9ef":"82633","3c0e6537":"82651",d2567b4d:"82813",ce9b313c:"82902"
,e1fc87d9:"83061",fd4ba951:"83228","6b04e7ad":"83260","497459e9":"83283",b478b21b:"83351",b9f7f737:"83357","739bc6b2":"83501",c32a5425:"83508","554c686d":"83510","5cd45a8d":"83512","827b607c":"83592",f4610d17:"83765","48db209f":"84038","51b7d1eb":"84275",b46e7759:"84359","262e1fb1":"84432","66301b34":"84567","08cc3f2a":"84643","9f14d4e5":"84654","1007ba84":"84752","015ef8b2":"85048","4e65812e":"85493","273187e1":"85969","00ff3ab8":"85992","52fb3760":"86440",b57dcd1d:"86519","5b38bd06":"86592","22a8c514":"86608","8025f7fd":"86697","0439459b":"86761","962e1587":"86766",ec65f5d5:"86943",f97a64b3:"87027","11f27dd8":"87097","1e942b07":"87166",e1c2af7b:"87413",b76458e3:"87585","46f628a8":"87836",e3bf2dfe:"88156","0f8260a7":"88439","18754cb8":"88526","77053eb1":"88564","7c5cb72e":"88646","7b07dcad":"89534","2464c061":"89641","1a4e3b56":"89677","98a79a26":"89799","087fccde":"90244",f6c4aca5:"90300","05b7df8f":"90388","728f6513":"90442",ac0e80dd:"90479",a875518b:"90628","3a894f2b":"90759","0d936d6e":"90861","05a8e5eb":"90866","695a0e95":"90950",f6f0ee1b:"91007",b8de4b14:"91019","67d300de":"91034",a534381b:"91117","9f0c8c51":"91214",c0c2b9da:"91259",a11db7eb:"91367","221f3b9a":"91378","70b24ff0":"91518","631988a9":"91584","7fd555e2":"91589","0d808a5a":"91645","3ca1fc8b":"91700",e097c1da:"91821","382b7bd1":"92097","6383d72d":"92311",f21c8b70:"92340","1567a249":"92428","2828c0bd":"92489","3a3cf5dd":"92514",d7dbf034:"92728",d9d7f0a9:"92749",df8f2207:"92878","26ca5cfc":"92922",f9d7044e:"93061",a6aa9e1f:"93089",ef05350a:"93176",dc5eefd4:"93450","19d4af76":"93602","0fa6c6d6":"93609","52a2e7f4":"93683","4aa36b6e":"93744",bff09194:"93804",af82476a:"93954","137765a7":"94106",fa74e77e:"94125","4f49e52d":"94287","0a1ee2df":"94304","12a40cbb":"94419",a4a649e5:"94445",a4a37188:"94919",cafc3c94:"94963","0e5b1676":"95095",f470690a:"95175","4204125f":"95259","570b38e4":"95774","67f51f7e":"95903","7cae6c3b":"95904",c5298e55:"95930",fbdbf422:"96010","8d1ef8e7":"96655",f4b1ab07:"96792",e7136c90:"96907","225bf44d":"96935","2b471e02":"96995","4741b16e":"97021",ca71fe7b:"97192","168e5dc9":"97214","66f5903d":"97491","1420d1e4":"97492",ebbe4e7d:"97515",b4e6e6a7:"97559","244544b0":"97597","37e3b2f7":"97613","532dad37":"97655","31f0dae5":"97665",c736ecf7:"97732",f689083d:"97734",c67b3c2e:"98268","4104106c":"98433","8e84163f":"98663",e0b2cabb:"98746",d9a25476:"99007","765bde49":"99285","01b32472":"99434",dcf58f45:"99450","67f3d899":"99521",ee2b3c0a:"99632"}[e]||e,r.p+r.u(e)},(()=>{var e={51303:0,40532:0};r.f.j=(f,b)=>{var a=r.o(e,f)?e[f]:void 0;if(0!==a)if(a)b.push(a[2]);else if(/^(40532|51303)$/.test(f))e[f]=0;else{var d=new Promise(((b,d)=>a=e[f]=[b,d]));b.push(a[2]=d);var c=r.p+r.u(f),t=new Error;r.l(c,(b=>{if(r.o(e,f)&&(0!==(a=e[f])&&(e[f]=void 0),a)){var d=b&&("load"===b.type?"missing":b.type),c=b&&b.target&&b.target.src;t.message="Loading chunk "+f+" failed.\n("+d+": "+c+")",t.name="ChunkLoadError",t.type=d,t.request=c,a[1](t)}}),"chunk-"+f,f)}},r.O.j=f=>0===e[f];var f=(f,b)=>{var a,d,c=b[0],t=b[1],o=b[2],n=0;if(c.some((f=>0!==e[f]))){for(a in t)r.o(t,a)&&(r.m[a]=t[a]);if(o)var i=o(r)}for(f&&f(b);n{"use strict";var e,f,b,a,d,c={},t={};function r(e){var f=t[e];if(void 0!==f)return f.exports;var b=t[e]={id:e,loaded:!1,exports:{}};return c[e].call(b.exports,b,b.exports,r),b.loaded=!0,b.exports}r.m=c,r.c=t,e=[],r.O=(f,b,a,d)=>{if(!b){var c=1/0;for(i=0;i=d)&&Object.keys(r.O).every((e=>r.O[e](b[o])))?b.splice(o--,1):(t=!1,d0&&e[i-1][2]>d;i--)e[i]=e[i-1];e[i]=[b,a,d]},r.n=e=>{var f=e&&e.__esModule?()=>e.default:()=>e;return 
r.d(f,{a:f}),f},b=Object.getPrototypeOf?e=>Object.getPrototypeOf(e):e=>e.__proto__,r.t=function(e,a){if(1&a&&(e=this(e)),8&a)return e;if("object"==typeof e&&e){if(4&a&&e.__esModule)return e;if(16&a&&"function"==typeof e.then)return e}var d=Object.create(null);r.r(d);var c={};f=f||[null,b({}),b([]),b(b)];for(var t=2&a&&e;"object"==typeof t&&!~f.indexOf(t);t=b(t))Object.getOwnPropertyNames(t).forEach((f=>c[f]=()=>e[f]));return c.default=()=>e,r.d(d,c),d},r.d=(e,f)=>{for(var b in f)r.o(f,b)&&!r.o(e,b)&&Object.defineProperty(e,b,{enumerable:!0,get:f[b]})},r.f={},r.e=e=>Promise.all(Object.keys(r.f).reduce(((f,b)=>(r.f[b](e,f),f)),[])),r.u=e=>"assets/js/"+({37:"ff37d1e9",48:"844f46a5",122:"4a8dbbc6",331:"8355b08c",604:"ed2b8e88",628:"19ce5436",638:"c983f72a",684:"79ca3466",733:"4fbefe4a",757:"52285efb",997:"010f538e",1053:"52fc18de",1178:"1bf711d8",1426:"fa0a96ff",1484:"232a5b9a",1738:"0e760f5c",2312:"f6bd7cc2",2453:"a3def401",2803:"7a79be67",2820:"e936f9f6",2856:"b31f0c62",3015:"290bbe6d",3099:"343a65b7",3273:"0a09fc38",3377:"c63bdbb4",3453:"9007293a",3664:"1ef7a213",3857:"c2319041",4237:"8085909f",4270:"ef64c709",4391:"0fc52616",4644:"9ac18e3c",5017:"f83e3e43",5044:"b430dbd6",5078:"ea385099",5172:"dbe9f459",5352:"c2c35f38",5482:"8445e33e",5558:"2cf5c9f6",5621:"6a04bf88",5627:"94fd4cc3",5710:"de95f9b4",6038:"98acceed",6050:"3852251c",6597:"3505d13c",6664:"114a0ea4",6800:"371b5a64",6994:"efe016d3",7010:"2aaf12ef",7477:"3f4f5020",7488:"735396ab",7615:"87bbf9f8",7617:"840d1cce",7623:"b144e829",7698:"7b92706b",7706:"9580127c",7861:"f3333784",7881:"b9e364fe",7945:"e94673f9",8106:"52052568",8109:"4220343e",8217:"f8223cf0",8229:"70037571",8258:"1730ce49",8325:"d49a3a10",8366:"70ec2d67",8410:"695b08bd",8420:"56ed19dd",8442:"b6c1521e",8726:"fcb7c80a",8863:"02ad56ee",9054:"a48a9539",9437:"5e4e61a3",9704:"b45174e4",9817:"14eb3368",9832:"6a6147d5",9885:"6fa36db2",10063:"30f527c7",10297:"13f90819",10322:"f7daf5fe",10387:"8ba97af6",10426:"6352e992",10552:"304f028d",10619:"d9293a3c",10853:"5cf46a9a",11142:"5b188835",11357:"9c3672b5",11575:"ba136adc",11579:"358ca55c",11933:"bbc2b27f",12045:"26a80f01",12046:"fa26972a",12230:"7520942f",12402:"a9ff5d75",12697:"63c787c2",12829:"8c56eedf",12915:"7097285a",13015:"b40db2ab",13085:"1f391b9e",13360:"4a4c152b",13398:"4058c823",13464:"3a7594cb",13497:"3ee42e3e",13545:"c1d8a90e",13596:"079fe0b3",13838:"b9c0af58",14051:"b4ea6d68",14094:"94c8d0a4",14219:"5ae2fd00",14220:"42917112",14611:"cb997589",14631:"1aaa8d08",14899:"3f49754a",14999:"2b94f1a7",15107:"90b498df",15133:"4a93df7c",15326:"e9ebb693",15344:"221d88a7",15399:"5517e946",15494:"14f6b037",15518:"6b9868e6",15602:"83f9535a",15717:"b9d35a0d",15734:"d69f1c18",15762:"a5de73d8",15851:"4f2455b0",15987:"b2c16f4e",16189:"8a2e5722",16355:"4052d3d5",16509:"00429eb7",16590:"8e5814b3",16651:"d22054a7",16663:"ec0ac9db",16781:"069faf48",16857:"3f5ea235",17420:"94ced535",17573:"63b6a597",17696:"2287f69c",17925:"f1fe5cc7",18031:"158ed014",18034:"75d4719a",18159:"311af35b",18390:"e703f3a7",18438:"9e2f083a",18631:"f5784bce",18656:"83bddd4b",18677:"ae3f1154",18870:"0182af35",19056:"4f63e6a8",19093:"5f86de18",19205:"86d446e2",19457:"60f25d92",19570:"6042952c",19608:"e1586f77",19661:"020a0ff4",19699:"9dbc57f1",19977:"171c08f2",19981:"d60c28fa",20006:"934b3b5c",20015:"4e1c0a1c",20116:"808b9749",20243:"dceeb781",20343:"3a0a8c2d",20426:"0154d667",20454:"81ce6d13",20703:"032f8ca1",21383:"2c0c4af3",21401:"9d765af9",21484:"ba7af0ad",21683:"7628c73f",21703:"b66d95b7",21852:"cac8e99d",21897:"ee9bc1a2",21899:"c2d8f9fd",22154:"225d85cd",22184:"1bd
f368a",22422:"4edd86bf",22531:"60fcf8e3",22671:"de689bb5",22691:"4f2db759",22723:"69e6ed04",22854:"db1823d5",23064:"d11520dd",23089:"535cf760",23198:"66c5ad31",23372:"69c441bd",23454:"e2262ac9",23849:"543df9b7",23873:"bcbec2d9",24359:"5f63ac35",24766:"e9f92e0d",24913:"4988404d",24966:"08bdb996",25080:"e65795b9",25125:"4edf6cbb",25196:"faa3ccc1",25227:"bbf7817a",25231:"eacffe03",25286:"391aff16",25304:"eef2ff81",25481:"85ebb381",25623:"c327d421",25743:"915f7087",25794:"b34c50e5",25847:"b09e51a7",25857:"1ddf7480",26314:"7f177be4",26315:"4c97e608",26453:"082a9ce0",26553:"74bd70f4",26952:"ac6b5ff3",27019:"10362b01",27028:"a285ff6f",27063:"e934991f",27226:"9245a8c6",27242:"beaf9ddb",27244:"0254e92e",27256:"5c062db9",27319:"da0c8116",27837:"b4a8043d",27855:"541e2717",27918:"17896441",28058:"f34f398b",28145:"babe571b",28199:"a6767219",28275:"42fd365f",28369:"d307d5dd",28583:"22711736",28687:"57e8111f",28836:"dd2fa4fa",28843:"cf57d094",28920:"d4bbb0c6",29017:"23b53440",29129:"4a8159d5",29162:"7709179f",29227:"df7b95b8",29257:"d94a5b94",29327:"6e3cf958",29514:"1be78505",29566:"511592be",29604:"6a5b295a",29620:"3e423595",29789:"0fe97ad9",30018:"2efdc0bf",30040:"e1399b64",30214:"12d48ddb",30227:"5a27c07c",30248:"1c910b4c",30299:"15a2eb39",30335:"a648e1ec",30495:"8a1f6266",30611:"21a4b026",30700:"300f5a7a",30724:"2790a299",30743:"5e204f51",30753:"76723d32",31003:"515b0e16",31132:"f128a0f0",31135:"700e33eb",31348:"24634ed0",31609:"97ba9f7a",31757:"310da260",31972:"7d1b9d2c",32197:"a9e85955",32344:"01c7a724",32403:"c39a1b67",32574:"ff198b7d",32627:"a940942f",32708:"ac2c6f29",33133:"2f2b5329",33209:"a361f0db",33275:"485b9e1f",33280:"6cffcc32",33319:"ce53ffd6",33508:"584ccef3",33568:"9147217a",33581:"b53ee4cc",33630:"48ef63ed",33647:"808a5912",33733:"708744f2",33882:"85550d99",33990:"0b5f5bbf",34206:"8bdb3070",34214:"197575b4",34492:"195ec8c1",34594:"d64c3433",34749:"ae8acb83",34934:"6e13655f",34967:"9af977f2",34996:"be7dee77",35002:"4a945222",35090:"1fb2655d",35377:"6c2093fb",35398:"da11032a",35644:"c6c2f8a6",35680:"ffb3fd1a",35727:"4a85be1e",35782:"b589b176",35975:"da6fbf2a",36650:"57fa9de9",36686:"bf4ba93b",36770:"2294c633",36779:"1c7a0340",37050:"08c13187",37279:"f5ac3b90",37382:"e0be8f6f",37460:"8cacefc1",37582:"e6ac9ebe",37651:"37734e29",37728:"54e82dbd",37748:"19dd9a55",37754:"70978acc",37795:"feb943f9",37838:"1280d58f",37923:"28ea4247",37948:"8974c763",37961:"5979b063",37979:"40ec79c5",37984:"714230e1",38057:"d41d8467",38097:"edef2ad1",38130:"0e832ef5",38336:"a6400791",38600:"8c6b1b70",38780:"6a4ca75b",38781:"94607c5f",38794:"e8b803ba",38889:"eb8d02f3",38945:"78f7c451",39010:"f5b7c6a9",39073:"04288e05",39259:"4f0dde4f",39714:"99d1ecb8",39717:"f6df8ec8",39923:"c46736c1",40055:"c8594c9f",40117:"4254c5fd",40127:"0c5ad103",40208:"de48c1c2",40270:"8c7d23e7",40339:"4a100773",40418:"70b87c8a",40626:"f79c2b36",40766:"54e84bd7",40812:"17e657ee",40821:"f8bd7d44",40835:"6abbc264",40858:"ff734053",40869:"92851fbb",40882:"25076710",40976:"b3d197ad",41160:"de4b4bc0",41200:"d0db6cd5",41203:"4b7d35aa",41307:"4e85b922",41331:"d2487d2c",41566:"9f789b70",41571:"4a6890ba",41642:"2508adfd",41735:"563c77e7",41905:"bcf95b3c",42036:"e1bc2a63",42267:"a42917dd",42310:"4e4c4edb",42333:"d216db91",42346:"05309783",42374:"170f5865",42589:"5ebfacad",42613:"3d607786",42888:"f30b4e00",43018:"26fa933c",43049:"ddeb9c3b",43086:"444ef230",43148:"9de9dc34",43333:"82d96bf5",43345:"45b07980",43488:"4f088abf",43583:"fbfa5a90",43602:"470ed423",43871:"e16a4367",44113:"4d232fa6",44116:"ad46602f",44193:"dc727da6",44209:"e88e2b57",44353:"e8c81
125",44608:"d0273d46",44702:"f248a380",45121:"ab87274c",45123:"167f29d7",45345:"00e5e0c6",45508:"64930ae0",45711:"940c9439",46103:"ccc49370",46142:"0c8d610a",46154:"6d71a54c",46250:"d584ff55",46305:"e89bd621",46643:"1cc51124",46781:"c32220c7",46822:"605b97c9",46828:"e5becd70",46989:"f5a85496",47145:"dd73d8cf",47246:"8dbb57bc",47335:"b675f7d6",47348:"129735db",47547:"82d2e731",47561:"cb2d3221",47652:"2e6fe460",47839:"be284c34",47857:"9444fc8d",48036:"a95a7c55",48155:"b1a57682",48236:"6f14a4c7",48494:"9241169f",48610:"6875c492",48625:"9aaaf4b8",48758:"937ccad8",48850:"a49e650c",48919:"6a312c97",49452:"c80c34af",49484:"1c6266f1",49623:"488446b3",49958:"6ef7e3d4",50070:"65cafd8e",50572:"93b8c5e1",50808:"e41794f0",51115:"5c2ccfbc",51237:"f3cb94e5",51445:"baf5811d",51494:"1224d608",51789:"fc042285",51828:"8baf15a9",52017:"0bcbab68",52071:"f5f8a48c",52238:"a6dcb37f",52338:"d667446e",52434:"b75b9dbd",52535:"814f3328",52569:"a24481e9",52579:"c33a8a7c",52932:"b46d1039",52950:"52c003c1",52989:"898c55cb",53118:"06db7cdd",53174:"3c2b2163",53389:"54a5ea7d",53470:"72e38fbe",53486:"79b2265b",53608:"9e4087bc",53698:"8866a401",53876:"39200a92",54012:"c2839c2d",54018:"d721da33",54202:"79b64cea",54236:"fb88b8ca",54293:"bb29086a",54647:"ff1fa6c9",54813:"e4fc1a09",54894:"6ac0f798",55084:"3b690a08",55191:"05cbd5e2",55224:"c2fb8e8b",55269:"0f519dc1",55411:"893f2e93",55558:"14e99011",55759:"e37c4032",55800:"a65e9479",55912:"05037b3e",56115:"c4dc1033",56414:"e9fc3e68",56462:"717ca7ad",56653:"48efa9f4",56686:"53b3fc79",57040:"5d9699b4",57050:"3402daf1",57077:"cea706bc",57119:"48d83bfd",57168:"41c52eb0",57169:"c44d11af",57550:"cdc79d9c",57832:"b44a2473",57849:"e5ae2d3d",58034:"bb1d8af3",58064:"d9a67898",58079:"488d465e",58280:"66ce2abc",58288:"ac8cc8fe",58428:"b004fb50",58497:"70e93a45",58714:"d475afe6",58738:"9750cd01",59070:"f830ec9e",59477:"c580cfa2",59543:"c44bb002",59593:"1a3abbc3",59978:"2007206c",59988:"d2aa22d4",60169:"30f26a7a",60197:"0abf7f02",60303:"48d7f22e",60606:"426f5ee7",60706:"eb884ce7",60738:"07182537",60840:"5db8c956",60841:"f6a05f02",60899:"41786804",60933:"a9bdffda",60952:"86a7690c",60981:"9fbb892a",61086:"1c8f664c",61154:"39704467",61209:"808beaf0",61520:"a404e9a0",61859:"329ba483",62240:"e1d438e9",62259:"3f534172",62455:"dccd6689",62497:"aaf8be7c",62516:"2891c2a3",62519:"65bd9c5f",62595:"31c97e84",62614:"6e09f910",62972:"1c9ffcde",63013:"35ac9352",63053:"c0df61e5",63167:"db9e00b3",63259:"2d86cfb6",63323:"425319e1",63439:"647961e4",63637:"09a8101c",63642:"5b222fc6",63741:"021c8d1d",63749:"9a3e0d8e",63776:"8c99d685",63981:"86b9f332",63991:"0e1333d1",64013:"01a85c17",64195:"c4f5d8e4",64232:"56ac2859",64243:"d05368c3",64336:"94d4ac07",64388:"e0ee4473",64419:"27dcd181",64675:"e4a2f027",64696:"6e8a7b67",64804:"742b38dc",64835:"6633d22a",65251:"8f4add25",65477:"300fad81",65487:"155d8733",65571:"73f0aa6e",65753:"89439f6f",65893:"a123ff76",65926:"2b022a0d",65992:"36385a98",66019:"0a90bd61",66033:"18b1ff93",66099:"ec244af9",66242:"bc0e8ad0",66444:"aa72c38b",66497:"c0a2372d",66537:"c4b4de0f",66627:"d64808fa",66651:"1fbd1224",66668:"b1db9e78",66672:"8bc7054e",66686:"a00df5b4",66814:"06b5abd5",66975:"afc3e988",67e3:"54e2ce19",67052:"b5dae24c",67197:"02dc33ee",67251:"ffe586b2",67434:"1933092b",67490:"3125c86a",67493:"89197f4f",67562:"36ea8d35",67622:"98735e69",67700:"a8ee6229",67856:"4d42bb9b",68216:"ef8eddd0",68368:"58413115",68468:"e82f66e0",68551:"c2d757e2",68648:"3052e807",68652:"996a3652",68927:"76df9d58",69171:"3e382c14",69189:"561eb05e",69197:"27a255b0",69253:"bb243f37",69407:"fd7a878f",69432:"8b5714b2
",69964:"d04223d4",70177:"b425f106",70266:"ab0051c0",70440:"decd1b07",70727:"8178af10",70729:"25508138",70936:"2759b647",70982:"d50e2b40",71012:"d11663f1",71133:"297e3da8",71151:"d272aefc",71223:"597d409d",71303:"5d153b8f",71328:"2fff3a21",71359:"f2d08d34",71468:"2acb43b2",71515:"83f4d82c",71630:"e8dcc3fe",71791:"51698cc9",71898:"74a6c4d8",71926:"c212c0a6",71940:"b1059194",72050:"61d029d7",72235:"6f0c12c9",72601:"bb9438bd",72612:"aa826c81",72971:"16399044",73012:"9a8df0df",73323:"2605ac5e",73337:"c52c4229",73344:"25619125",73354:"f848febd",73945:"7e7aedec",74668:"2f5655a7",74710:"e159664d",74834:"cd336e02",74854:"3dd66ec8",74960:"276eee65",74970:"603045b5",75080:"5ac38b2f",75337:"6a5e520d",75555:"02eacc81",75678:"ae14fa1f",75927:"417f410e",76058:"fa6b5e6c",76299:"97c179be",76336:"7ce70624",76434:"19e5cabb",76508:"0f2db0e2",76611:"8136a61a",76785:"7d93b36b",76894:"763e49fc",77022:"bb1bd555",77034:"81eaba73",77046:"d1be9ff4",77099:"246d6ed0",77170:"0955c03d",77182:"5250d15a",77309:"18305907",77364:"edb3edba",77411:"f5ef3ca7",77795:"2e52b9a2",77916:"6d43c7c4",77946:"99a61e74",78059:"2f117675",78061:"d999f503",78132:"9ddf9492",78133:"d00410c7",78192:"61c47875",78484:"ae4a8bfb",78505:"17bd234e",78564:"af753b33",78897:"a2e6ced6",79005:"a68ee39b",79203:"57ada458",79245:"c43f31e5",79367:"a60bbbfe",79476:"b1509bad",79509:"dfd81e36",79522:"0f00d983",79636:"2c768b07",79695:"97a5ae26",79787:"f97394ec",79823:"a9e32c6a",79994:"88d99e0f",80035:"81540514",80053:"935f2afb",80443:"051147c5",80670:"e1daa54d",80819:"0abb84f4",80901:"44a20d39",81155:"45fd4fee",81183:"b04df543",81194:"99a72a3c",81225:"6131b196",81257:"e7c29825",81296:"7ca86db6",81512:"8955acc6",81551:"abf597e2",81583:"eb689fda",81744:"b5a12906",81814:"83f3468a",81944:"f7decf47",82010:"434ff406",82048:"08a845a3",82204:"855c4aff",82369:"c202d824",82457:"ebabe618",82462:"b1cd5b20",82623:"64f93100",82633:"7527a9ef",82651:"3c0e6537",82813:"d2567b4d",82902:"ce9b313c",83061:"e1fc87d9",83228:"fd4ba951",83260:"6b04e7ad",83283:"497459e9",83351:"b478b21b",83357:"b9f7f737",83501:"739bc6b2",83508:"c32a5425",83510:"554c686d",83512:"5cd45a8d",83592:"827b607c",83765:"f4610d17",84038:"48db209f",84275:"51b7d1eb",84359:"b46e7759",84432:"262e1fb1",84567:"66301b34",84643:"08cc3f2a",84654:"9f14d4e5",84752:"1007ba84",85048:"015ef8b2",85493:"4e65812e",85969:"273187e1",85992:"00ff3ab8",86440:"52fb3760",86519:"b57dcd1d",86592:"5b38bd06",86608:"22a8c514",86697:"8025f7fd",86761:"0439459b",86766:"962e1587",86943:"ec65f5d5",87027:"f97a64b3",87097:"11f27dd8",87166:"1e942b07",87413:"e1c2af7b",87585:"b76458e3",87836:"46f628a8",88156:"e3bf2dfe",88439:"0f8260a7",88526:"18754cb8",88564:"77053eb1",88646:"7c5cb72e",88654:"43386584",89534:"7b07dcad",89641:"2464c061",89677:"1a4e3b56",89799:"98a79a26",90210:"45528793",90244:"087fccde",90300:"f6c4aca5",90388:"05b7df8f",90442:"728f6513",90479:"ac0e80dd",90628:"a875518b",90759:"3a894f2b",90861:"0d936d6e",90866:"05a8e5eb",90950:"695a0e95",91007:"f6f0ee1b",91019:"b8de4b14",91034:"67d300de",91117:"a534381b",91214:"9f0c8c51",91259:"c0c2b9da",91367:"a11db7eb",91378:"221f3b9a",91518:"70b24ff0",91584:"631988a9",91589:"7fd555e2",91645:"0d808a5a",91700:"3ca1fc8b",91821:"e097c1da",92097:"382b7bd1",92311:"6383d72d",92340:"f21c8b70",92428:"1567a249",92489:"2828c0bd",92514:"3a3cf5dd",92728:"d7dbf034",92749:"d9d7f0a9",92878:"df8f2207",92922:"26ca5cfc",93061:"f9d7044e",93089:"a6aa9e1f",93176:"ef05350a",93450:"dc5eefd4",93602:"19d4af76",93609:"0fa6c6d6",93683:"52a2e7f4",93744:"4aa36b6e",93804:"bff09194",93954:"af82476a",94106:"137765a7",94125:"fa74e77e",
94287:"4f49e52d",94304:"0a1ee2df",94399:"90806480",94419:"12a40cbb",94445:"a4a649e5",94919:"a4a37188",94963:"cafc3c94",95095:"0e5b1676",95175:"f470690a",95259:"4204125f",95701:"13524175",95774:"570b38e4",95903:"67f51f7e",95904:"7cae6c3b",95930:"c5298e55",96010:"fbdbf422",96655:"8d1ef8e7",96792:"f4b1ab07",96907:"e7136c90",96935:"225bf44d",96995:"2b471e02",97021:"4741b16e",97192:"ca71fe7b",97214:"168e5dc9",97491:"66f5903d",97492:"1420d1e4",97515:"ebbe4e7d",97559:"b4e6e6a7",97597:"244544b0",97613:"37e3b2f7",97655:"532dad37",97665:"31f0dae5",97732:"c736ecf7",97734:"f689083d",98268:"c67b3c2e",98433:"4104106c",98663:"8e84163f",98746:"e0b2cabb",99007:"d9a25476",99285:"765bde49",99434:"01b32472",99450:"dcf58f45",99521:"67f3d899",99632:"ee2b3c0a"}[e]||e)+"."+{37:"5933f91a",48:"74fd891b",122:"563dc2ec",331:"1d47d146",604:"a7c6b3e3",628:"27da0127",638:"6f112f49",684:"80470805",733:"c42582f0",757:"c4acc940",997:"ba5bab3a",1053:"079ad337",1178:"1b016d75",1426:"6b2f3b4a",1484:"cbbff5b5",1738:"f9b17472",2312:"31e4d51b",2453:"9674ca46",2803:"82219b38",2820:"76c03fca",2856:"1f026311",3015:"18161c54",3099:"f19817e6",3273:"a4fa98ff",3377:"5a639484",3453:"fed16a79",3664:"bc3f9094",3857:"d65decec",4237:"9682c9ac",4270:"8014c846",4391:"3b5c27bf",4644:"17bb9f95",5017:"1e9a05e1",5044:"b157fe8d",5078:"8bf9c9f9",5172:"d5fcbed9",5352:"bf553ddd",5482:"bf182b79",5558:"4b032376",5621:"3fe3f7fb",5627:"43da0b32",5710:"f774a773",6038:"40ced236",6050:"10961830",6597:"4176e0b9",6664:"5a348ad3",6800:"a9884af4",6994:"49e7032e",7010:"d268b173",7477:"25d58eec",7488:"2fc666c4",7615:"2dc8b17a",7617:"f2822fae",7623:"7b70e5fe",7698:"ba66e9f8",7706:"b8513fad",7861:"35d79e8f",7881:"11f110ce",7945:"b8087d3e",8106:"747ef2bc",8109:"4fb6e49a",8217:"da113ae8",8229:"b40f9232",8258:"75104e59",8325:"8a4558b6",8366:"f8337d0c",8410:"88affed4",8420:"7458bc54",8442:"c1f8837b",8726:"9f87d9dc",8863:"b6937ae1",9054:"eb408306",9437:"b9e6beb3",9704:"de317854",9817:"1e44026e",9832:"28df05ef",9885:"2c640d29",10063:"c41f1484",10297:"6c33e176",10322:"dd830233",10387:"cad306f0",10426:"3ac65578",10552:"76185752",10619:"37de607a",10853:"3fcc3904",11142:"7d5fbbcb",11357:"261138c2",11575:"1a3bc56f",11579:"32d9fd30",11933:"655c02c7",12045:"f28881fd",12046:"f41906ee",12230:"a7eadb40",12402:"f87c09e0",12697:"d5002455",12829:"6da4d4f5",12915:"1903402a",13015:"ee5daff5",13085:"b361a78f",13360:"2b1056df",13398:"99ca8781",13464:"bbdbb9fa",13497:"880930ae",13545:"fea2cc4d",13596:"9170d405",13838:"8af72575",14051:"04d20d21",14094:"1d0c4c03",14219:"a6b3e366",14220:"b0b7e58e",14611:"dcf7ea74",14631:"7c3a8c2c",14899:"784f7c36",14999:"773ad495",15107:"8214e6bc",15133:"bc4ae948",15326:"4e0f2084",15344:"e9a9ed50",15399:"3aaab999",15494:"4426ecb7",15518:"bea7049f",15602:"3d06b878",15717:"6cba3a3e",15734:"4ca1345e",15762:"c65d80ed",15851:"1be5c04c",15944:"af64afb6",15987:"c0ece9ca",16189:"ac7d6c24",16355:"0e05495c",16509:"1f72e055",16590:"3f88f18a",16651:"1ad5d2eb",16663:"b2658965",16781:"68c9f9f2",16857:"f3d4e928",17420:"dde40518",17573:"db8b9f9e",17696:"2e7b95ef",17925:"2184e0d7",18031:"f2e14279",18034:"1ad663e1",18159:"45bcd038",18390:"ce32c2df",18438:"e3a6c35d",18631:"bc92bf7b",18656:"d9b033ae",18677:"b72d21a6",18870:"890018a7",19056:"0033a7d3",19093:"8e505683",19205:"f8846446",19457:"4c4f5d30",19570:"564524e7",19608:"fb0aa998",19661:"607974ec",19699:"da89ad50",19977:"024c377a",19981:"5b9f9e44",20006:"c81b309a",20015:"06f7ed6f",20116:"781aa38c",20243:"1eb094da",20343:"7d2c6c8e",20426:"7b583577",20454:"8e697cc3",20703:"e98e7801",21383:"432f7d73",21401:"720a19b6",21484:"b9d3
f670",21683:"96074098",21703:"c006b91c",21791:"bf96e0fe",21852:"61ab5ab4",21897:"b6a6cbb5",21899:"2c1bd9b6",22154:"2fe71644",22184:"96da28b4",22422:"057b5be6",22531:"d9ba6b5b",22671:"0be0df4b",22691:"15e7263f",22723:"c55f3e3e",22854:"6a607205",23064:"d539790c",23089:"237f18d0",23198:"a87e262a",23372:"30d227ba",23454:"70985550",23849:"ff2876c8",23873:"4e58d8e7",24359:"8584c686",24766:"468b4cce",24913:"b8cf8f2a",24966:"be84482a",25080:"aee6535e",25125:"db6dc901",25196:"5ea4518b",25227:"a36efb36",25231:"88ea1c4d",25286:"f44147b8",25304:"d9a8a3c1",25481:"adee698e",25623:"3a22b6ce",25743:"5efeaac5",25794:"802d7efd",25847:"5ce20e70",25857:"96f6f437",26314:"a5ea2e84",26315:"1e3e0f82",26453:"05d9a6f4",26553:"eef5174d",26952:"63951660",27019:"12268c2c",27028:"c071c979",27063:"5182008c",27226:"055d4472",27242:"552d478e",27244:"46bd7ae5",27256:"515df8fe",27319:"0c08dcda",27837:"214e5139",27855:"a2b46343",27918:"4838b5ea",28058:"6cf8c65c",28145:"f3e3531e",28199:"e49a6cd8",28275:"4ba0175f",28369:"9364ef55",28583:"80545a09",28687:"8ae0bba2",28836:"68702e93",28843:"92f1ba22",28920:"a83b3398",29017:"a6e982ba",29129:"59072297",29162:"f790562c",29227:"4d4a2c08",29257:"0fec21f6",29327:"ef0e977c",29514:"1ac7e75c",29566:"46f25376",29604:"7b1e6a41",29620:"6531b96b",29789:"9056a7dc",30018:"61a17b96",30040:"2f49b9ac",30214:"4d2bdddf",30227:"4394b932",30248:"43a53ae7",30299:"59d38389",30335:"ae8edd18",30495:"6246670c",30611:"a094de39",30700:"3eea7ce9",30724:"db9f21d2",30743:"3edeafcd",30753:"64e50ac7",31003:"86b8d54c",31132:"65e4e427",31135:"2722a1b3",31348:"fd44f662",31609:"5418d104",31757:"504ba490",31972:"8fe8f4bc",32197:"62e7dd01",32344:"b311d0ea",32403:"8286490f",32574:"979343f5",32627:"ab3286c2",32708:"f778e5a1",33133:"561fcd99",33209:"d07c11ae",33275:"db7bfb13",33280:"a4797d87",33319:"c5ef9132",33508:"efc56178",33568:"dd1d4440",33581:"59b58625",33630:"d67449a5",33647:"e0283e49",33733:"11b26ab0",33882:"2e8066d9",33990:"1bed7406",34206:"4c244d24",34214:"332631ce",34492:"d15584dd",34594:"6a4bb01c",34749:"bb39a16f",34934:"5d0afc2f",34967:"34cde04f",34996:"efe32c17",35002:"5cc02ead",35090:"a4c4cdce",35377:"ebeb2981",35398:"2241c2d0",35644:"10a0f9e2",35680:"c6814c6d",35727:"50be8dfc",35782:"2841a0f3",35975:"d2118afd",36650:"8b12abae",36686:"a8743407",36770:"d25587ff",36779:"a7ae2649",37050:"11e9a364",37279:"d57a38d6",37382:"97a942cf",37460:"e46c663e",37582:"759cb15e",37651:"455ee130",37728:"ccce251d",37748:"2c2d8707",37754:"213ea3ad",37795:"9ca96123",37838:"3e1a02dc",37923:"72722260",37948:"3e458a41",37961:"b519f53a",37979:"01160dff",37984:"e1a0e516",38057:"fd7f8ab9",38097:"7e8ea3d8",38130:"8397b213",38336:"c34346ea",38600:"067113f7",38780:"287fad1e",38781:"5845a6c7",38794:"03fed7c3",38889:"692cbf88",38945:"1eaa87ef",39010:"3b247030",39073:"63e653fc",39259:"c2a9f13e",39714:"fefc3f0b",39717:"c3f8e7e4",39923:"b24d81ed",40055:"624e1196",40117:"60ecff76",40127:"3a604273",40208:"bc9176a9",40270:"02cfbfd4",40339:"4dc46223",40418:"a4e15d62",40626:"d6b49d3c",40766:"2c718ec5",40812:"9ff48a8f",40821:"8d018d8e",40835:"a3b514a5",40858:"41d37882",40869:"2dfaee7d",40882:"e79cc157",40976:"c6560ef2",41160:"a2f4a227",41200:"d391fd86",41203:"5506e1fd",41307:"c7af2fcf",41331:"3563c427",41566:"2ecc0770",41571:"248dc9b1",41642:"cc8c7920",41735:"04a1a727",41905:"3ab27b6b",42036:"e740be66",42267:"6b510e96",42310:"aff971a4",42333:"05386e40",42346:"d30b6991",42374:"b3727ab2",42589:"72f42903",42613:"26b510bd",42888:"59c80b99",43018:"8e4c6e7b",43049:"17af2537",43086:"271d0678",43148:"d370b211",43333:"aaa32d4a",43345:"57c984eb",43488:"862a37
ab",43583:"87c50916",43602:"188d7735",43871:"47ce1193",44113:"e19d688d",44116:"2b20ffdd",44193:"8b05f328",44209:"bba20461",44353:"32787e96",44608:"a2e4652a",44702:"5d55e231",45121:"cd5290d0",45123:"b90f6c07",45345:"d77dfa36",45508:"cca7883d",45711:"5f495d56",46103:"17b266de",46142:"7605e6ad",46154:"045dfcec",46250:"62c50002",46305:"27c4ebb7",46643:"d125c5f7",46781:"c8d25f6f",46822:"ea1b059a",46828:"d5010265",46989:"d5af6b90",47145:"7b8f79b0",47246:"9a4a46dc",47335:"d3fac4a3",47348:"992960d4",47547:"f3e85f31",47561:"b6d9379b",47652:"97981a37",47839:"f1ec02b6",47857:"46e8a67b",48036:"2026159d",48155:"1c16a9a6",48236:"3d17c421",48494:"b476b64b",48610:"038d0474",48625:"8eaa651d",48758:"082a2e65",48850:"599cc2d0",48919:"ab0f5849",49452:"aafb838d",49484:"5adf4bb3",49623:"61dfd9b5",49958:"5b2459e8",50070:"1e384e32",50572:"3cec2473",50808:"f9279d46",51115:"88b465c5",51237:"96ef6750",51445:"6b02ac22",51494:"27f26066",51789:"c65b9779",51828:"c3a12705",52017:"433cdd7c",52071:"70a282f1",52238:"4b3d7d33",52338:"c182542c",52434:"4db50a8a",52535:"5ea52c33",52569:"50b50d05",52579:"7c69d60a",52932:"7b6e43d4",52950:"6d651ca0",52989:"0b6167e8",53118:"13c65b53",53174:"1b0c817f",53389:"9cfe49ec",53470:"5e7b1c85",53486:"a40d83e8",53608:"2cff198f",53698:"c1bc4aab",53876:"34ab4e07",54012:"e4a6012d",54018:"77587fb0",54202:"3a68b9a9",54236:"dc86db17",54293:"ed0be7d0",54647:"207b0a07",54813:"a67be84b",54894:"b3d78903",55084:"ea337b16",55191:"50f44c34",55224:"cd3cac2b",55269:"fba5356f",55411:"42fcfab8",55558:"5f8032b7",55759:"c946df51",55800:"2033ed07",55912:"f441e25d",56115:"337617fe",56414:"a3e37706",56462:"bf860770",56653:"b8f2f574",56686:"7da2837f",57040:"2bd92a3d",57050:"dbc70cd1",57077:"6bd7e954",57119:"273b819a",57168:"735db641",57169:"4dba0047",57550:"db779aff",57832:"07bdd689",57849:"266019e9",58034:"32ee197d",58064:"8c8e5d8b",58079:"ed8b7022",58280:"58478506",58288:"f8530ad3",58428:"892b90c9",58497:"4e3e1ede",58714:"3d3640e8",58738:"7352ecf0",59070:"aa69fedd",59477:"81511f98",59543:"e9802148",59593:"f6f0974b",59978:"c5e114b4",59988:"4251282d",60169:"bdf2d842",60197:"b0525b92",60303:"0e49fdc2",60606:"56078b2a",60706:"6df08112",60738:"fe4f3e69",60840:"5c3c223b",60841:"ce0a2cc4",60899:"4aad0b0a",60933:"e8cc360d",60952:"44f86efc",60981:"9fdfba11",61086:"96544307",61154:"cbef6e0a",61209:"6cc41d6c",61520:"bb03362b",61859:"578be531",62240:"4706a2ff",62259:"9abbba15",62455:"a1e559c8",62497:"59c4fd5f",62516:"15c60f6b",62519:"947b35d9",62595:"94aefa50",62614:"05e19552",62972:"6835bc16",63013:"b2d2a01c",63053:"4700cdff",63167:"abda22b7",63259:"4a673c5e",63323:"39cab38c",63439:"1d1a43fa",63637:"19f260ae",63642:"5f1ae0dd",63741:"71eb1bcd",63749:"c19603e3",63776:"f9f58eb6",63981:"02286d16",63991:"8a54b88b",64013:"03a563a6",64195:"10856761",64232:"8a33c233",64243:"75989d92",64336:"b16dfcf9",64388:"12b1ce59",64419:"595bb63e",64675:"792cf31c",64696:"8063b40d",64804:"4e092706",64835:"e93b78c1",65251:"368ed371",65477:"f8693855",65487:"0f938f35",65571:"a88a4f02",65753:"207a449d",65893:"65d2aa95",65926:"f0c4933f",65992:"8155c7d1",66019:"f5ee5607",66033:"c702b287",66099:"0bdcede5",66242:"f6cd64b3",66444:"d90e13d8",66497:"42dab590",66537:"3e5d621b",66627:"535a46a9",66651:"cf30a8b5",66668:"f6c24571",66672:"f7ea5c91",66686:"9a002a38",66814:"7c54daca",66975:"26a03d4e",67e3:"7a2f8454",67052:"ec301934",67197:"ded5091d",67251:"e7a20109",67434:"93fb2a4d",67490:"41c6c7b5",67493:"fdb654f7",67562:"59cf737d",67622:"475d26f0",67700:"a1bb520e",67856:"4d6434fd",68216:"11dd4212",68368:"aa3680b0",68468:"a14d04c5",68551:"710d1a33",68648:"4c270532"
,68652:"e8c65b5c",68927:"383338f9",69171:"783e4228",69189:"662fb40b",69197:"647a8ea2",69253:"af3363a4",69407:"f48995b1",69432:"71709137",69964:"0f862fcd",70177:"83306a3d",70266:"4a1b2650",70440:"0b3cc72a",70727:"c0778700",70729:"77c5f782",70936:"2c3852c1",70982:"6c87873b",71012:"51dae131",71133:"b0814e2e",71151:"45dc9fc4",71223:"2e67de49",71303:"6e97b863",71328:"e98b1854",71359:"89fbf093",71468:"9b44129b",71515:"5411c5b7",71630:"e8cc78e1",71791:"e3dd836c",71898:"389786ed",71926:"2e9fc2ee",71940:"df64700a",72050:"c4e794da",72235:"82db2ab9",72601:"7e838f42",72612:"994ef572",72971:"46cb4a7c",73012:"45a8932f",73323:"7e89d196",73337:"6ba05268",73344:"2562505d",73354:"1bc3c6e5",73945:"db10a21a",74248:"317299c5",74668:"cdc50bef",74710:"75bf4517",74834:"e5197f2d",74854:"a8e121b7",74960:"945d52ca",74970:"6dfb1c76",75080:"151b4ca6",75287:"648c9fec",75337:"d6d1fbd3",75555:"b5f54726",75678:"c37afb6b",75927:"be7f1598",76058:"a08263a0",76299:"6c1850be",76336:"31000fc2",76434:"3ae4b594",76508:"7ef32f7a",76611:"4f449a9f",76785:"c58c4056",76894:"7e01133a",77022:"585cd7a3",77034:"422ca81d",77046:"ae095a34",77099:"b5df1df3",77170:"55466a2c",77182:"4a946cf4",77309:"5c4e6bd7",77364:"71272021",77411:"40c39ca2",77795:"20664d89",77916:"eb7548b2",77946:"50b604a6",78059:"3323f5f4",78061:"44508634",78132:"4b4ab92b",78133:"2ef6dc97",78192:"ae1c3fad",78484:"e85424a8",78505:"d445ca81",78564:"de056442",78897:"b71d3b4d",79005:"dd91fbad",79203:"d16bdd25",79245:"6ed6d002",79367:"aa518dfe",79476:"931fa5f1",79509:"277c3204",79522:"67e487c9",79636:"ef79b69c",79695:"4ab823e5",79787:"2ec87ae6",79823:"0c88d933",79994:"2356b20f",80035:"3b00b5a6",80053:"65a2cbf9",80443:"8f901e8b",80670:"03d4e9dd",80819:"3468159a",80901:"63df8fcb",81155:"e7b29595",81183:"20dee322",81194:"19b943c6",81225:"d1f5aabc",81257:"33523c17",81296:"eca59293",81512:"7b9e9d63",81551:"91f7f90f",81583:"718ddb87",81744:"a90cd918",81814:"069107bb",81944:"227c3bb5",82010:"7000a07f",82048:"2d38b3a9",82204:"c745fcf8",82369:"bb0c4fb2",82457:"c412245c",82462:"70e8235a",82623:"34994308",82633:"2c5462db",82651:"050a38f5",82813:"fb93488f",82902:"d2330565",83061:"276981f7",83228:"e1598dc3",83260:"9a2d7ed1",83283:"040206c6",83351:"489cb4ae",83357:"83371262",83501:"fee7d68f",83508:"1ae987e2",83510:"8c65d881",83512:"cc5e4449",83592:"7797585d",83765:"c0555ed1",84038:"e928c030",84275:"6ce8164d",84359:"5f80f6ab",84432:"8d57e9f8",84567:"228433c9",84643:"086988aa",84654:"6096f393",84752:"74260af4",85048:"c332faad",85493:"7301f55c",85969:"c44bb119",85992:"3bd7cb73",86440:"763d93fb",86519:"8cc9ab92",86592:"df18f459",86608:"ad599d7a",86697:"071b687c",86761:"ca1acb07",86766:"3b471447",86943:"42d883c0",87027:"39650adc",87097:"98219dbd",87166:"42af16fe",87413:"352bf907",87585:"633732ee",87836:"e1650ecd",88156:"1426d205",88439:"9269f18e",88526:"6c026c2f",88564:"9c758bb6",88646:"c1aeaa84",88654:"4d8d2cd9",89534:"2a3a831f",89641:"6052dea2",89677:"707e39e6",89799:"0d3aa064",90210:"0b56f251",90244:"2d402ca5",90300:"2f314275",90388:"fad898d4",90442:"6d84fff1",90479:"532d5e7a",90628:"c22cde5f",90759:"6c300aea",90861:"08620d12",90866:"7e7d7458",90950:"cccdb672",91007:"6dc74c15",91019:"9b09c5d1",91034:"20fd4e24",91117:"4ba89e87",91214:"ce6a2748",91259:"ac23e049",91367:"7e8d93a6",91378:"b2e765ef",91518:"48b2838a",91584:"d1090d09",91589:"9b3441f0",91645:"2d64e871",91700:"ada5bd8d",91821:"ce55f017",92097:"ed5c446a",92311:"c2c068c3",92340:"8143b691",92428:"6f7a5c51",92489:"dd0cbfc9",92514:"c510e5e3",92728:"2f172920",92749:"a8da147e",92878:"67576f54",92922:"98815355",93061:"25603cec",93089:"e66d7d82",9
3176:"b76b5762",93450:"d80fc25b",93602:"6e42a41c",93609:"044f94a5",93683:"79c29c31",93744:"66e8e948",93804:"93958d88",93954:"c0420a62",94106:"dd96c34a",94125:"a2f571e3",94287:"21549ec9",94304:"a62a2020",94399:"4a575fbe",94419:"6c86618e",94445:"2d14c86b",94598:"cbf8f4af",94919:"7801b297",94963:"12a978c5",95095:"3b2912a9",95175:"29b42909",95259:"ad1997b9",95701:"333dde60",95774:"c2ceb80c",95903:"713409ed",95904:"c8f5c4c3",95930:"e956c391",96010:"49da5ac1",96655:"beb9aec9",96792:"2fe4fb76",96907:"24e25ee0",96935:"5ab4fc91",96995:"5a858146",97021:"1c267736",97192:"5b53c0f9",97214:"48a9d551",97491:"fa98d8ed",97492:"d37a6c7f",97515:"4152d2c6",97559:"349a5f60",97597:"6c665d18",97613:"7205cf58",97655:"406d9826",97665:"afe54de4",97732:"98e5f02e",97734:"bffeb70d",98268:"41c57aa4",98433:"2f92f49f",98663:"914b8a3d",98746:"6efc8a06",99007:"fffcfa67",99285:"adcd2037",99434:"a6352696",99450:"1325fa2f",99521:"939c4172",99632:"fc20a8b1"}[e]+".js",r.miniCssF=e=>{},r.g=function(){if("object"==typeof globalThis)return globalThis;try{return this||new Function("return this")()}catch(e){if("object"==typeof window)return window}}(),r.o=(e,f)=>Object.prototype.hasOwnProperty.call(e,f),a={},d="website:",r.l=(e,f,b,c)=>{if(a[e])a[e].push(f);else{var t,o;if(void 0!==b)for(var n=document.getElementsByTagName("script"),i=0;i{t.onerror=t.onload=null,clearTimeout(s);var d=a[e];if(delete a[e],t.parentNode&&t.parentNode.removeChild(t),d&&d.forEach((e=>e(b))),f)return f(b)},s=setTimeout(l.bind(null,void 0,{type:"timeout",target:t}),12e4);t.onerror=l.bind(null,t.onerror),t.onload=l.bind(null,t.onload),o&&document.head.appendChild(t)}},r.r=e=>{"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},r.p="/Cloud-Native/",r.gca=function(e){return 

    01. It's 30DaysOfServerless!

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    What We'll Cover

    • What is Serverless September? (6 initiatives)
    • How can I participate? (3 actions)
    • How can I skill up? (30 days)
    • Who is behind this? (Team Contributors)
    • How can you contribute? (Custom Issues)
    • Exercise: Take the Cloud Skills Challenge!
    • Resources: #30DaysOfServerless Collection.

    Serverless September

    Welcome to Day 01 of 🍂 #ServerlessSeptember! Today, we kick off a full month of content and activities to skill you up on all things Serverless on Azure with content, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfServerless in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

    What are 3 things you can do today, to jumpstart your learning journey?

    Serverless Hacks


    #30DaysOfServerless

    #30DaysOfServerless is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Serverless On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON FUNCTIONS ⚡️

    Here's a sneak peek at what we have planned for week 1. We'll start with a broad look at fundamentals, walk through examples for each targeted programming language, then wrap with a post that showcases the role of Azure Functions in powering different serverless scenarios.

    • Sep 02: Learn Core Concepts for Azure Functions
    • Sep 03: Build and deploy your first Function
    • Sep 04: Azure Functions - for Java Developers!
    • Sep 05: Azure Functions - for JavaScript Developers!
    • Sep 06: Azure Functions - for .NET Developers!
    • Sep 07: Azure Functions - for Python Developers!
    • Sep 08: Wrap: Azure Functions + Serverless on Azure

    Ways to Participate

    We hope you are as excited as we are to jumpstart this journey. We want to make this a useful, beginner-friendly journey, and we need your help!

    Here are the many ways you can participate:

    • Follow Azure on dev.to - we'll republish posts under this series page and welcome comments and feedback there!
    • Discussions on GitHub - Use this if you have feedback for us (on how we can improve these resources), or want to chat with your peers about serverless topics.
    • Custom Issues - just pick a template, create a new issue by filling in the requested details, and submit. You can use these to:
      • submit questions for AskTheExpert (live Q&A) ahead of time
      • submit your own articles or projects for the community to learn from
      • share your ServerlessHack and get listed in our Hall Of Fame!
      • report bugs or share ideas for improvements

    Here's the list of custom issues currently defined.

    Community Buzz

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Azure Functions post tomorrow!


    - + \ No newline at end of file diff --git a/blog/02-functions-intro/index.html b/blog/02-functions-intro/index.html index 72ba975144..49ae743696 100644 --- a/blog/02-functions-intro/index.html +++ b/blog/02-functions-intro/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    02. Learn Core Concepts

    · 9 min read
    Nitya Narasimhan

    Welcome to Day 2️⃣ of #30DaysOfServerless!

    Today, we kickstart our journey into serverless on Azure with a look at Functions-as-a-Service (FaaS). We'll explore Azure Functions - from core concepts to usage patterns.

    Ready? Let's Go!


    What We'll Cover

    • What is Functions-as-a-Service? (FaaS)
    • What is Azure Functions?
    • Triggers, Bindings and Custom Handlers
    • What is Durable Functions?
    • Orchestrators, Entity Functions and Application Patterns
    • Exercise: Take the Cloud Skills Challenge!
    • Resources: #30DaysOfServerless Collection.


    1. What is FaaS?

    FaaS stands for Functions-as-a-Service. But what does that mean for us as application developers? We know that building and deploying modern applications at scale can get complicated, and it starts with us needing to make decisions about compute. In other words, we need to answer this question: "Where should I host my application, given my resource dependencies and scaling requirements?"


    Azure has this useful flowchart (shown below) to guide your decision-making. You'll see that hosting options generally fall into three categories:

    • Infrastructure as a Service (IaaS) - where you provision and manage Virtual Machines yourself (the cloud provider manages the underlying physical infrastructure).
    • Platform as a Service (PaaS) - where you use a provider-managed hosting environment like Azure Container Apps.
    • Functions as a Service (FaaS) - where you forget about hosting environments and simply deploy your code for the provider to run.

    Here, "serverless" compute refers to hosting options where we (as developers) can focus on building apps without having to manage the infrastructure. See serverless compute options on Azure for more information.


    2. Azure Functions

    Azure Functions is the Functions-as-a-Service (FaaS) option on Azure. It is the ideal serverless solution if your application is event-driven with short-lived workloads. With Azure Functions, we develop applications as modular blocks of code (functions) that are executed on demand, in response to configured events (triggers). This approach brings us two advantages:

    • It saves us money. We only pay for the time the function runs.
    • It scales with demand. We have 3 hosting plans for flexible scaling behaviors.

    Azure Functions can be programmed in many popular languages (C#, F#, Java, JavaScript, TypeScript, PowerShell or Python), with Azure providing language-specific handlers and default runtimes to execute them.

    Concept: Custom Handlers
    • What if we wanted to program in a non-supported language?
    • Or what if we wanted to use a different runtime for a supported language?

    Custom Handlers have you covered! These are lightweight web servers that can receive and process input events from the Functions host - and return responses that can be delivered to any output targets. By this definition, custom handlers can be implemented in any language that supports receiving HTTP events. Check out the quickstart for writing a custom handler in Rust or Go.

    Custom Handlers
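
    To make the idea concrete, here's a minimal sketch of what a custom handler amounts to: a plain web server that the Functions host forwards invocations to. Node.js is used here purely for illustration (in practice you'd reach for a custom handler when your language isn't natively supported); the port environment variable is the one the Functions host sets, and the exact request/response payload contract is described in the custom handlers documentation.

    // customhandler.js - a minimal sketch of a custom handler (illustration only)
    const http = require("http");

    // The Functions host tells the handler which port to listen on.
    const port = process.env.FUNCTIONS_CUSTOMHANDLER_PORT || 3000;

    http.createServer((req, res) => {
      // The host forwards invocation payloads to a path matching the function name.
      res.writeHead(200, { "Content-Type": "application/json" });
      // Return outputs keyed by binding name; see the docs for the full payload shape.
      res.end(JSON.stringify({ Outputs: { res: { body: "Hello from a custom handler" } } }));
    }).listen(port);

    The app's host.json would then point its customHandler description at this process - but for the rest of this series we'll stick with natively supported languages.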

    Concept: Triggers and Bindings

    We talked about what functions are (code blocks). But when are they invoked or executed? And how do we provide inputs (arguments) and retrieve outputs (results) from this execution?

    This is where triggers and bindings come in.

    • Triggers define how a function is invoked and what associated data it will provide. A function must have exactly one trigger.
    • Bindings declaratively define how a resource is connected to the function. The resource or binding can be of type input, output, or both. Bindings are optional. A function can have multiple input and output bindings.

    Azure Functions comes with a number of supported bindings that can be used to integrate relevant services to power a specific scenario. For instance:

    • HTTP Triggers - invoke the function in response to an HTTP request. Use these to implement serverless APIs for your application.
    • Event Grid Triggers - invoke the function on receiving events from an Event Grid topic. Use these to process events reactively, and potentially publish responses back to custom Event Grid topics.
    • SignalR Service Triggers - invoke the function in response to messages from Azure SignalR, allowing your application to take actions in real time.

    Triggers and bindings help you abstract your function's interfaces to the other components it interacts with, eliminating hardcoded integrations. They are configured differently based on the programming language you use. For example, JavaScript functions are configured in a function.json file. Here's an example of what that looks like.

    {
      "disabled": false,
      "bindings": [
        // ... bindings here
        {
          "type": "bindingType",
          "direction": "in",
          "name": "myParamName",
          // ... more depending on binding
        }
      ]
    }

    The key thing to remember is that triggers and bindings have a direction property - triggers are always in, input bindings are in and output bindings are out. Some bindings can support a special inout direction.

    The documentation has code examples for bindings to popular Azure services. Here's an example of the trigger and bindings configuration for a Blob Storage use case.

    // function.json configuration

    {
      "bindings": [
        {
          "queueName": "myqueue-items",
          "connection": "MyStorageConnectionAppSetting",
          "name": "myQueueItem",
          "type": "queueTrigger",
          "direction": "in"
        },
        {
          "name": "myInputBlob",
          "type": "blob",
          "path": "samples-workitems/{queueTrigger}",
          "connection": "MyStorageConnectionAppSetting",
          "direction": "in"
        },
        {
          "name": "myOutputBlob",
          "type": "blob",
          "path": "samples-workitems/{queueTrigger}-Copy",
          "connection": "MyStorageConnectionAppSetting",
          "direction": "out"
        }
      ],
      "disabled": false
    }

    The code below shows the function implementation. In this scenario, the function is triggered by a queue message carrying an input payload with a blob name. In response, it copies that data to the resource associated with the output binding.

    // function implementation

    module.exports = async function(context) {
      context.log('Node.js Queue trigger function processed', context.bindings.myQueueItem);
      context.bindings.myOutputBlob = context.bindings.myInputBlob;
    };
    Concept: Custom Bindings

    What if we have a more complex scenario that requires bindings for non-supported resources?

    There is an option to create custom bindings if necessary. We don't have time to dive into the details here, but definitely check out the documentation.


    3. Durable Functions

    This sounds great, right? But now, let's talk about one challenge for Azure Functions. In the use cases so far, the functions are stateless - they take inputs at runtime if necessary, and return output results if required. But they are otherwise self-contained, which is great for scalability!

    But what if I needed to build more complex workflows that need to store and transfer state, and complete operations in a reliable manner? Durable Functions is an extension of Azure Functions that makes stateful workflows possible.

    Concept: Orchestrator Functions

    How can I create workflows that coordinate functions?

    Durable Functions use orchestrator functions to coordinate execution of other Durable functions within a given Functions app. These functions are durable and reliable. Later in this post, we'll talk briefly about some application patterns that showcase popular orchestration scenarios.

    Concept: Entity Functions

    How do I persist and manage state across workflows?

    Entity Functions provide explicit state management for Durable Functions, defining operations to read and write state to durable entities. They are associated with a special entity trigger for invocation. These are currently available only for a subset of programming languages, so check to see if they are supported for your programming language of choice.
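
    For a flavor of what this looks like in JavaScript, here's a sketch based on the classic counter example from the Durable Functions documentation - the entity holds a number as state and exposes a few named operations. Treat the details as illustrative rather than a drop-in implementation; the accompanying function.json would declare an entityTrigger binding.

    const df = require("durable-functions");

    module.exports = df.entity(function (context) {
      // Read current state, defaulting to 0 the first time the entity is used.
      const currentValue = context.df.getState(() => 0);

      switch (context.df.operationName) {
        case "add":
          context.df.setState(currentValue + context.df.getInput());
          break;
        case "reset":
          context.df.setState(0);
          break;
        case "get":
          context.df.return(currentValue);
          break;
      }
    });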

    USAGE: Application Patterns

    Durable Functions is a fascinating topic that would require a separate, longer post to do it justice. For now, let's look at some application patterns that showcase its value, starting with the simplest one - Function Chaining, as shown below:

    Function Chaining

    Here, we want to execute a sequence of named functions in a specific order. As shown in the snippet below, the orchestrator function coordinates invocations on the given functions in the desired sequence - "chaining" inputs and outputs to establish the workflow. Take note of the yield keyword. This triggers a checkpoint, preserving the current state of the function for reliable operation.

    const df = require("durable-functions");

    module.exports = df.orchestrator(function*(context) {
      try {
        const x = yield context.df.callActivity("F1");
        const y = yield context.df.callActivity("F2", x);
        const z = yield context.df.callActivity("F3", y);
        return yield context.df.callActivity("F4", z);
      } catch (error) {
        // Error handling or compensation goes here.
      }
    });
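
    For context, each of F1 through F4 above is a regular activity function in the same Functions app. A minimal sketch of what one might look like in JavaScript is below; the folder name, binding name, and logic are illustrative assumptions - the real requirement is a function.json that declares an activityTrigger input binding.

    // F1/index.js - a placeholder activity function (illustrative)
    module.exports = async function (context) {
      // The activityTrigger input declared in function.json (named "input" here) is on context.bindings.
      const input = context.bindings.input;
      return `F1 processed: ${input ?? "start"}`;
    };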

    Other application patterns for durable functions include:

    There's a lot more to explore but we won't have time to do that today. Definitely check the documentation and take a minute to read the comparison with Azure Logic Apps to understand what each technology provides for serverless workflow automation.


    4. Exercise

    That was a lot of information to absorb! Thankfully, there are a lot of examples in the documentation that can help put these in context. Here are a couple of exercises you can do, to reinforce your understanding of these concepts.


    5. What's Next?

    The goal for today was to give you a quick tour of key terminology and concepts related to Azure Functions. Tomorrow, we dive into the developer experience, starting with core tools for local development and ending by deploying our first Functions app.

    Want to do some prep work? Here are a few useful links:


    6. Resources


    - + \ No newline at end of file diff --git a/blog/03-functions-quickstart/index.html b/blog/03-functions-quickstart/index.html index 0354caa705..90a134843b 100644 --- a/blog/03-functions-quickstart/index.html +++ b/blog/03-functions-quickstart/index.html @@ -14,13 +14,13 @@ - +

    03. Build Your First Function

    · 9 min read
    Nitya Narasimhan

    Welcome to Day 3 of #30DaysOfServerless!

    Yesterday we learned core concepts and terminology for Azure Functions, the signature Functions-as-a-Service option on Azure. Today we take our first steps into building and deploying an Azure Functions app, and validate local development setup.

    Ready? Let's go.


    What We'll Cover


    Developer Guidance

    Before we jump into development, let's familiarize ourselves with language-specific guidance from the Azure Functions Developer Guide. We'll review the JavaScript version but guides for F#, Java, Python, C# and PowerShell are also available.

    1. A function is defined by two things: code (written in a supported programming language) and configuration (specified in a function.json file, declaring the triggers, bindings and other context for execution).

    2. A function app is the unit of deployment for your functions, and is associated with a single execution context or runtime. It can contain multiple functions, but they must be in the same language.

    3. A host configuration is runtime-specific configuration that affects all functions running in a given function app instance. It is defined in a host.json file.

    4. A recommended folder structure is defined for the function app, but may vary based on the programming language used. Check the documentation on folder structures to learn the default for your preferred language.

    Here's an example of the JavaScript folder structure for a function app containing two functions with some shared dependencies. Note that host.json (runtime configuration) is defined once, in the root directory. And function.json is defined separately for each function.

    FunctionsProject
    | - MyFirstFunction
    | | - index.js
    | | - function.json
    | - MySecondFunction
    | | - index.js
    | | - function.json
    | - SharedCode
    | | - myFirstHelperFunction.js
    | | - mySecondHelperFunction.js
    | - node_modules
    | - host.json
    | - package.json
    | - local.settings.json

    We'll dive into what the contents of these files look like, when we build and deploy the first function. We'll cover local.settings.json in the About Local Testing section at the end.


    My First Function App

    The documentation provides quickstart options for all supported languages. We'll walk through the JavaScript versions in this article. You have two options for development:

    I'm a huge fan of VS Code - so I'll be working through that tutorial today.

    PRE-REQUISITES

    Don't forget to validate your setup by checking the versions of installed software.

    Install VSCode Extension

    Installing the Visual Studio Code extension should automatically open this page in your IDE with similar quickstart instructions, but potentially more recent screenshots.

    Azure Functions extension for Visual Studio Code

    Note that it may make sense to install the Azure tools for Visual Studio Code extensions pack if you plan on working through the many projects in Serverless September. This includes the Azure Functions extension by default.

    Create First Function App

    Walk through the Create local [project] steps of the quickstart. The process is quick and painless and scaffolds out this folder structure and files. Note the existence (and locations) of the function.json and host.json files.

    Final screenshot for VS Code workflow

    Explore the Code

    Check out the function.json configuration file. It shows that the function is activated by an httpTrigger with an input binding (tied to the req payload) and an output binding (tied to the res payload). And it supports both GET and POST requests on the exposed URL.

    {
      "bindings": [
        {
          "authLevel": "anonymous",
          "type": "httpTrigger",
          "direction": "in",
          "name": "req",
          "methods": [
            "get",
            "post"
          ]
        },
        {
          "type": "http",
          "direction": "out",
          "name": "res"
        }
      ]
    }

    Check out index.js - the function implementation. We see it logs a message to the console when invoked. It then extracts a name value from the input payload (req) and crafts a different responseMessage based on the presence/absence of a valid name. It returns this response in the output payload (res).

    module.exports = async function (context, req) {
      context.log('JavaScript HTTP trigger function processed a request.');

      const name = (req.query.name || (req.body && req.body.name));
      const responseMessage = name
        ? "Hello, " + name + ". This HTTP triggered function executed successfully."
        : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";

      context.res = {
        // status: 200, /* Defaults to 200 */
        body: responseMessage
      };
    }

    Preview Function App Locally

    You can now run this function app locally using Azure Functions Core Tools. VS Code integrates seamlessly with this CLI-based tool, making it possible for you to exploit all its capabilities without leaving the IDE. In fact, the workflow will even prompt you to install those tools if they didn't already exist in your local dev environment.

    Now run the function app locally by clicking on the "Run and Debug" icon in the activity bar (highlighted, left) and pressing the "▶️" (Attach to Node Functions) to start execution. On success, your console output should show something like this.

    Final screenshot for VS Code workflow

    You can test the function locally by visiting the Function Url shown (http://localhost:7071/api/HttpTrigger1) or by opening the Workspace region of the Azure extension, and selecting the Execute Function now menu item as shown.

    Final screenshot for VS Code workflow

    In the latter case, the Enter request body popup will show a pre-populated request of {"name":"Azure"} that you can submit.

    Final screenshot for VS Code workflow

    On successful execution, your VS Code window will show a notification as follows. Take note of the console output - it shows the message encoded in index.js.

    Final screenshot for VS Code workflow

    You can also visit the function URL directly in a local browser - testing the case for a request made with no name payload attached. Note how the response in the browser now shows the non-personalized version of the message!

    Final screenshot for VS Code workflow

    🎉 Congratulations

    You created and ran a function app locally!

    (Re)Deploy to Azure

    Now, just follow the creating a function app in Azure steps to deploy it to Azure, using an active subscription! The deployed app resource should now show up under the Function App Resources where you can click Execute Function Now to test the Azure-deployed version instead. You can also look up the function URL in the portal and visit that link in your local browser to trigger the function without the name context.

    🎉 Congratulations

    You have an Azure-hosted serverless function app!

    Challenge yourself and try to change the code and redeploy to Azure to return something different. You have effectively created a serverless API endpoint!
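
    If you want a concrete starting point for that challenge, here's one possible tweak (a sketch, not part of the official quickstart): return a small JSON object instead of a plain-text greeting, then redeploy and hit the same endpoint.

    // index.js - one possible variation on the scaffolded HTTP trigger
    module.exports = async function (context, req) {
      const name = req.query.name || (req.body && req.body.name) || "world";

      context.res = {
        headers: { "Content-Type": "application/json" },
        body: { greeting: `Hello, ${name}!`, invokedAt: new Date().toISOString() }
      };
    };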


    About Core Tools

    That was a lot to cover! In the next few days we'll have more examples for Azure Functions app development - focused on different programming languages. So let's wrap today's post by reviewing two helpful resources.

    First, let's talk about Azure Functions Core Tools - the command-line tool that lets you develop, manage, and deploy Azure Functions projects from your local development environment. It is used transparently by the VS Code extension - but you can use it directly from a terminal for a powerful command-line end-to-end developer experience! The Core Tools commands are organized into the following contexts:

    Learn how to work with Azure Functions Core Tools. Not only can it help with quick command execution, it can also be invaluable for debugging issues that may not always be visible or understandable in an IDE.

    About Local Testing

    You might have noticed that the scaffold also produced a local.settings.json file. What is that and why is it useful? By definition, the local.settings.json file "stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally."

    Read the guidance on Code and test Azure Functions Locally to learn more about how to configure development environments locally, for your preferred programming language, to support testing and debugging on the local Functions runtime.

    Exercise

    We made it! Now it's your turn!! Here are a few things you can try to apply what you learned and reinforce your understanding:

    Resources

    Bookmark and visit the #30DaysOfServerless Collection. It's the one-stop collection of resources we will keep updated with links to relevant documentation and learning resources.

    - + \ No newline at end of file diff --git a/blog/04-functions-java/index.html b/blog/04-functions-java/index.html index 89f886209b..65b536bc05 100644 --- a/blog/04-functions-java/index.html +++ b/blog/04-functions-java/index.html @@ -14,13 +14,13 @@ - +

    04. Functions For Java Devs

    · 8 min read
    Rory Preddy

    Welcome to Day 4 of #30DaysOfServerless!

    Yesterday we walked through an Azure Functions Quickstart with JavaScript, and used it to understand the general Functions App structure, tooling and developer experience.

    Today we'll look at developing a Functions app with a different programming language - namely, Java - and explore developer guidance, tools and resources to build serverless Java solutions on Azure.


    What We'll Cover


    Developer Guidance

    If you're a Java developer new to serverless on Azure, start by exploring the Azure Functions Java Developer Guide. It covers:

    In this blog post, we'll dive into one quickstart, and discuss other resources briefly, for awareness! Do check out the recommended exercises and resources for self-study!


    My First Java Functions App

    In today's post, we'll walk through the Quickstart: Azure Functions tutorial using Visual Studio Code. In the process, we'll set up our development environment with the relevant command-line tools and VS Code extensions to make building Functions apps simpler.

    Note: Completing this exercise may incur a cost of a few USD cents based on your Azure subscription. Explore pricing details to learn more.

    First, make sure you have your development environment set up and configured.

    PRE-REQUISITES
    1. An Azure account with an active subscription - Create an account for free
    2. The Java Development Kit, version 11 or 8. - Install
    3. Apache Maven, version 3.0 or above. - Install
    4. Visual Studio Code. - Install
    5. The Java extension pack - Install
    6. The Azure Functions extension for Visual Studio Code - Install

    VS Code Setup

    NEW TO VISUAL STUDIO CODE?

    Start with the Java in Visual Studio Code tutorial to jumpstart your learning!

    Install the Extension Pack for Java (shown below) to install 6 popular extensions to help development workflow from creation to testing, debugging, and deployment.

    Extension Pack for Java

    Now, it's time to get started on our first Java-based Functions app.

    1. Create App

    1. Open a command-line terminal and create a folder for your project. Use the code command to launch Visual Studio Code from that directory as shown:

      $ mkdir java-function-resource-group-api
      $ cd java-function-resource-group-api
      $ code .
    2. Open the Visual Studio Code Command Palette (Ctrl + Shift + P) and select Azure Functions: Create New Project to kickstart the create workflow. Alternatively, you can click the Azure icon (on the activity sidebar) to get the Workspace window, click "+" and pick the "Create Function" option as shown below.

      Screenshot of creating function in Azure from Visual Studio Code.

    3. This triggers a multi-step workflow. Fill in the information for each step as shown in the following prompts. Important: Start this process from an empty folder - the workflow will populate it with the scaffold for your Java-based Functions app.

      For each prompt, provide the following value:
      • Choose the directory location: You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
      • Select a language: Choose Java.
      • Select a version of Java: Choose Java 11 or Java 8, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally.
      • Provide a group ID: Choose com.function.
      • Provide an artifact ID: Enter myFunction.
      • Provide a version: Choose 1.0-SNAPSHOT.
      • Provide a package name: Choose com.function.
      • Provide an app name: Enter HttpExample.
      • Select the build tool for Java project: Choose Maven.

    Visual Studio Code uses the provided information and generates an Azure Functions project. You can view the local project files in the Explorer - it should look like this:

    Azure Functions Scaffold For Java

    2. Preview App

    Visual Studio Code integrates with the Azure Functions Core tools to let you run this project on your local development computer before you publish to Azure.

    1. To build and run the application, use the following Maven command. You should see output similar to that shown below.

      $ mvn clean package azure-functions:run
      ..
      ..
      Now listening on: http://0.0.0.0:7071
      Application started. Press Ctrl+C to shut down.

      Http Functions:

      HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
      ...
    2. Copy the URL of your HttpExample function from this output to a browser and append the query string ?name=<YOUR_NAME>, making the full URL something like http://localhost:7071/api/HttpExample?name=Functions. The browser should display a message that echoes back your query string value. The terminal in which you started your project also shows log output as you make requests.

    🎉 CONGRATULATIONS

    You created and ran a function app locally!

    With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger. After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code and Maven to publish and test the project on Azure.

    3. Sign into Azure

    Before you can deploy, sign in to your Azure subscription.

    az login

    The az login command signs you into your Azure account.

    Use the following command to deploy your project to a new function app.

    mvn clean package azure-functions:deploy

    When the creation is complete, the following Azure resources are created in your subscription:

    • Resource group. Named as java-functions-group.
    • Storage account. Required by Functions. The name is generated randomly based on Storage account name requirements.
    • Hosting plan. Serverless hosting for your function app. The name is java-functions-app-service-plan.
    • Function app. A function app is the deployment and execution unit for your functions. The name is randomly generated based on your artifactId, appended with a randomly generated number.

    4. Deploy App

    1. Back in the Resources area in the side bar, expand your subscription, your new function app, and Functions. Right-click (Windows) or Ctrl+click (macOS) the HttpExample function and choose Execute Function Now....

      Screenshot of executing function in Azure from Visual Studio Code.

    2. In Enter request body you see the request message body value of { "name": "Azure" }. Press Enter to send this request message to your function.

    3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.

    You can also copy the complete Invoke URL shown in the output of the publish command into a browser address bar, appending the query parameter ?name=Functions. The browser should display similar output as when you ran the function locally.

    🎉 CONGRATULATIONS

    You deployed your function app to Azure, and invoked it!

    5. Clean up

    Use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name java-functions-group

    Next Steps

    So, where can you go from here? The example above used a familiar HTTP Trigger scenario with a single Azure service (Azure Functions). Now, think about how you can build richer workflows by using other triggers and integrating with other Azure or third-party services.

    Other Triggers, Bindings

    Check out Azure Functions Samples In Java for samples (and short use cases) that highlight other triggers - with code! This includes triggers to integrate with CosmosDB, Blob Storage, Event Grid, Event Hub, Kafka and more.

    Scenario with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other Services. Here are a couple of useful tutorials:

    Exercise

    Time to put this into action and validate your development workflow:

    Resources

    - + \ No newline at end of file diff --git a/blog/05-functions-js/index.html b/blog/05-functions-js/index.html index c236d9b6b6..af74c698d8 100644 --- a/blog/05-functions-js/index.html +++ b/blog/05-functions-js/index.html @@ -14,13 +14,13 @@ - +

    05. Functions for JS Devs

    · 7 min read
    Aaron Powell

    Welcome to Day 5 of #30DaysOfServerless!

    Yesterday we looked at Azure Functions from the perspective of a Java developer. Today, we'll do a similar walkthrough from the perspective of a JavaScript developer.

    And, we'll use this to explore another popular usage scenario for Azure Functions: building a serverless HTTP API using JavaScript.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance
    • Create Azure Function with CLI
    • Calling an external API
    • Azure Samples & Scenarios for JS
    • Exercise: Support searching
    • Resources: For self-study!


    Developer Guidance

    If you're a JavaScript developer new to serverless on Azure, start by exploring the Azure Functions JavaScript Developers Guide. It covers:

    • Quickstarts for Node.js - using Visual Studio Code, the CLI, or the Azure Portal
    • Guidance on hosting options and performance considerations
    • Azure Functions bindings (and code samples) for JavaScript
    • Scenario examples - integrations with other Azure Services

    Node.js 18 Support

    Node.js 18 Support (Public Preview)

    Azure Functions support for Node.js 18 entered Public Preview on Aug 31, 2022 and is supported by the Azure Functions v.4.x runtime!

    As we continue to explore how we can use Azure Functions, today we're going to look at using JavaScript to create one, and we're going to be using the newly released Node.js 18 support for Azure Functions to make the most out of the platform.

    Ensure you have Node.js 18 and Azure Functions v4.x versions installed, along with a text editor (I'll use VS Code in this post), and a terminal, then we're ready to go.

    Scenario: Calling The GitHub API

    The application we're going to be building today will use the GitHub API to return a random commit message, so that we don't need to come up with one ourselves! After all, naming things can be really hard! 🤣

    Creating the Azure Function

    To create our Azure Function, we're going to use the Azure Functions CLI, which we can install using npm:

    npm install --global azure-functions-core-tools@4

    Once that's installed, we can use the new func command to initialise our project:

    func init --worker-runtime node --language javascript

    When running func init we can either provide the worker-runtime and language as arguments, or use the menu system that the tool will provide us. For brevity's sake, I've used the arguments here, specifying that we want node as the runtime and javascript as the language, but you could change that to typescript if you'd prefer to use TypeScript.

    Once the init command is completed, you should have a .vscode folder, and the files .gitignore, host.json, local.settings.json, and package.json.

    Files generated by func init

    Adding a HTTP Trigger

    We have an empty Functions app so far; what we need to do next is create a Function for it to run. We're going to make an HTTP Trigger Function - a Function that responds to HTTP requests. We'll use the func new command to create it:

    func new --template "HTTP Trigger" --name "get-commit-message"

    When this completes, we'll have a folder for the Function, using the name we provided, that contains the files function.json and index.js. Let's open the function.json to understand it a little bit:

    {
      "bindings": [
        {
          "authLevel": "function",
          "type": "httpTrigger",
          "direction": "in",
          "name": "req",
          "methods": [
            "get",
            "post"
          ]
        },
        {
          "type": "http",
          "direction": "out",
          "name": "res"
        }
      ]
    }

    This file is used to tell Functions about the Function that we've created and what it does, so it knows to handle the appropriate events. We have a bindings node which contains the event bindings for our Azure Function. The first binding uses the type httpTrigger, which indicates that it'll be executed, or triggered, by an HTTP event, and the methods entry indicates that it's listening to both GET and POST (you can change this to the HTTP methods that you want to support). The HTTP request information will be bound to a property in the Functions context called req, so we can access query strings, the request body, etc.

    The other binding we have has the direction of out, meaning that it's something that the Function will return to the caller, and since this is an HTTP API, the type is http, indicating that we'll return an HTTP response, and that response will be on a property called res that we add to the Functions context.

    Let's go ahead and start the Function and call it:

    func start

    Starting the Function

    With the Function started, access the endpoint http://localhost:7071/api/get-commit-message via a browser or using cURL:

    curl http://localhost:7071/api/get-commit-message\?name\=ServerlessSeptember

    Hello from Azure Functions

    🎉 CONGRATULATIONS

    You created and ran a JavaScript function app locally!

    Calling an external API

    It's time to update the Function to do what we want to do - call the GitHub Search API and get some commit messages. The endpoint that we'll be calling is https://api.github.com/search/commits?q=language:javascript.

    Note: The GitHub API is rate limited and this sample will call it unauthenticated, so be aware of that in your own testing.

    To call this API, we'll leverage the newly released fetch support in Node 18 and async/await, to make for a very clean Function.

    Open up the index.js file, and delete the contents of the existing Function, so we have an empty one:

    module.exports = async function (context, req) {

    }

    The default template uses CommonJS, but you can use ES Modules with Azure Functions if you prefer.
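
    If you do prefer ES Modules, the equivalent shape would be roughly the sketch below (assuming you've opted into ESM, for example via an .mjs file extension or "type": "module" in package.json); the rest of this post sticks with CommonJS.

    // index.mjs - the same kind of Function written as an ES Module (sketch)
    export default async function (context, req) {
      context.res = {
        body: { message: "Hello from an ES Module function" }
      };
    }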

    Now we'll use fetch to call the API, and unpack the JSON response:

    module.exports = async function (context, req) {
      const res = await fetch("https://api.github.com/search/commits?q=language:javascript");
      const json = await res.json();
      const messages = json.items.map(item => item.commit.message);
      context.res = {
        body: {
          messages
        }
      };
    }

    To send a response to the client, we're setting the context.res property, where res is the name of the output binding in our function.json, and giving it a body that contains the commit messages.

    Run func start again, and call the endpoint:

    curl http://localhost:7071/api/get-commit-message

    Then you'll get some commit messages:

    A series of commit messages from the GitHub Search API

    🎉 CONGRATULATIONS

    There we go - we've created an Azure Function that acts as a proxy to another API, which we call (using native fetch in Node.js 18) and from which we return a subset of the JSON payload.

    Next Steps

    Other Triggers, Bindings

    This article focused on using the HTTPTrigger and relevant bindings, to build a serverless API using Azure Functions. How can you explore other supported bindings, with code samples to illustrate usage?
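
    As a small taste of what "other bindings" can look like (a sketch only - the binding name and schedule are illustrative, not from this series), a timer-triggered Function pairs a function.json entry of type timerTrigger with a handler like this:

    // index.js - a timer-triggered Function (illustrative)
    // function.json would declare a binding of type "timerTrigger" named "myTimer",
    // e.g. with "schedule": "0 */5 * * * *" to run every five minutes.
    module.exports = async function (context, myTimer) {
      if (myTimer.isPastDue) {
        context.log("Timer is running late!");
      }
      context.log("Timer trigger fired at", new Date().toISOString());
    };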

    Scenarios with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other services. Here are some suggestions:

    Exercise: Support searching

    The GitHub Search API allows you to provide search parameters via the q query string. In this sample, we hard-coded it to be language:javascript, but as a follow-on exercise, expand the Function to allow the caller to provide the search terms as a query string to the Azure Function, which is passed to the GitHub Search API. Hint - have a look at the req argument.
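
    If you get stuck, here's one possible direction (a sketch of a solution, not the official answer): read the search terms from the incoming request and fall back to the original hard-coded query.

    module.exports = async function (context, req) {
      // Use the caller-supplied search terms if present, otherwise keep the original query.
      const query = req.query.q || "language:javascript";
      const res = await fetch(`https://api.github.com/search/commits?q=${encodeURIComponent(query)}`);
      const json = await res.json();
      const messages = json.items.map(item => item.commit.message);
      context.res = { body: { messages } };
    }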

    Resources

    - + \ No newline at end of file diff --git a/blog/06-functions-dotnet/index.html b/blog/06-functions-dotnet/index.html index 373f9122cf..86dc5dab18 100644 --- a/blog/06-functions-dotnet/index.html +++ b/blog/06-functions-dotnet/index.html @@ -14,13 +14,13 @@ - +

    06. Functions for .NET Devs

    · 10 min read
    Mike James
    Matt Soucoup

    Welcome to Day 6 of #30DaysOfServerless!

    The theme for this week is Azure Functions. Today we're going to talk about why Azure Functions are a great fit for .NET developers.


    What We'll Cover

    • What is serverless computing?
    • How does Azure Functions fit in?
    • Let's build a simple Azure Function in .NET
    • Developer Guide, Samples & Scenarios
    • Exercise: Explore the Create Serverless Applications path.
    • Resources: For self-study!

    A banner image that has the title of this article with the author's photo and a drawing that summarizes the demo application.


    The leaves are changing colors and there's a chill in the air, or for those lucky folks in the Southern Hemisphere, the leaves are budding and a warmth is in the air. Either way, that can only mean one thing - it's Serverless September!🍂 So today, we're going to take a look at Azure Functions - what they are, and why they're a great fit for .NET developers.

    What is serverless computing?

    For developers, serverless computing means you write highly compact individual functions that do one thing - and run in the cloud. These functions are triggered by some external event. That event could be a record being inserted into a database, a file uploaded into BLOB storage, a timer interval elapsed, or even a simple HTTP request.

    But... servers are still definitely involved! What has changed from other types of cloud computing is that the idea and ownership of the server has been abstracted away.

    A lot of the time you'll hear folks refer to this as Functions as a Service or FaaS. The defining characteristic is all you need to do is put together your application logic. Your code is going to be invoked in response to events - and the cloud provider takes care of everything else. You literally get to focus on only the business logic you need to run in response to something of interest - no worries about hosting.

    You do not need to worry about wiring up the plumbing between the service that originates the event and the serverless runtime environment. The cloud provider will handle the mechanism to call your function in response to whatever event you chose to have the function react to. And it passes along any data that is relevant to the event to your code.

    And here's a really neat thing. You only pay for the time the serverless function is running. So, if you have a function that is triggered by an HTTP request, and you rarely get requests to your function, you would rarely pay.

    How does Azure Functions fit in?

    Microsoft's Azure Functions is a modern serverless architecture, offering event-driven cloud computing that is easy for developers to use. It provides a way to run small pieces of code or Functions in the cloud without developers having to worry themselves about the infrastructure or platform the Function is running on.

    That means we're only concerned about writing the logic of the Function. And we can write that logic in our choice of languages... like C#. We are also able to add packages from NuGet to Azure Functions—this way, we don't have to reinvent the wheel and can use well-tested libraries.

    And the Azure Functions runtime takes care of a ton of neat stuff for us, like passing in information about the event that caused it to kick off - in a strongly typed variable. It also "binds" to other services, like Azure Storage, so we can easily access those services from our code without having to worry about newing them up.

    Let's build an Azure Function!

    Scaffold the Function

    Don't worry about having an Azure subscription or even being connected to the internet—we can develop and debug Azure Functions locally using either Visual Studio or Visual Studio Code!

    For this example, I'm going to use Visual Studio Code to build up a Function that responds to an HTTP trigger and then writes a message to an Azure Storage Queue.

Diagram of how the Azure Function will use the HTTP trigger and the Azure Storage Queue Binding

    The incoming HTTP call is the trigger and the message queue the Function writes to is an output binding. Let's have at it!

    info

    You do need to have some tools downloaded and installed to get started. First and foremost, you'll need Visual Studio Code. Then you'll need the Azure Functions extension for VS Code to do the development with. Finally, you'll need the Azurite Emulator installed as well—this will allow us to write to a message queue locally.

    Oh! And of course, .NET 6!

    Now with all of the tooling out of the way, let's write a Function!

    1. Fire up Visual Studio Code. Then, from the command palette, type: Azure Functions: Create New Project

      Screenshot of create a new function dialog in VS Code

    2. Follow the steps as to which directory you want to create the project in and which .NET runtime and language you want to use.

      Screenshot of VS Code prompting which directory and language to use

    3. Pick .NET 6 and C#.

      It will then prompt you to pick the folder in which your Function app resides and then select a template.

      Screenshot of VS Code prompting you to pick the Function trigger template

      Pick the HTTP trigger template. When prompted for a name, call it: PostToAQueue.

    Execute the Function Locally

1. After giving the Function a namespace, you'll be prompted for an authorization level—pick Anonymous. Now we have a Function! Let's go ahead and hit F5 and see it run!
    info

    After the templates have finished installing, you may get a prompt to download additional components—these are NuGet packages. Go ahead and do that.

    When it runs, you'll see the Azure Functions logo appear in the Terminal window with the URL the Function is located at. Copy that link.

    Screenshot of the Azure Functions local runtime starting up

2. Type the link into a browser, adding a name parameter as shown in this example: http://localhost:7071/api/PostToAQueue?name=Matt. The Function will respond with a message. You can even set breakpoints in Visual Studio Code and step through the code!

    Write To Azure Storage Queue

    Next, we'll get this HTTP trigger Function to write to a local Azure Storage Queue. First we need to add the Storage NuGet package to our project. In the terminal, type:

    dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage

    Then set a configuration setting to tell the Function runtime where to find the Storage. Open up local.settings.json and set "AzureWebJobsStorage" to "UseDevelopmentStorage=true". The full file will look like:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": ""
  }
}

Then create a new class within your project. This class will hold nothing but properties. Call it whatever you want and add whatever properties you want to it. I called mine TheMessage and added Id and Name properties to it.

public class TheMessage
{
    public string Id { get; set; }
    public string Name { get; set; }
}

    Finally, change your PostToAQueue Function, so it looks like the following:


using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class PostToAQueue
{
    [FunctionName("PostToAQueue")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        [Queue("demoqueue", Connection = "AzureWebJobsStorage")] IAsyncCollector<TheMessage> messages,
        ILogger log)
    {
        string name = req.Query["name"];

        await messages.AddAsync(new TheMessage { Id = System.Guid.NewGuid().ToString(), Name = name });

        return new OkResult();
    }
}

    Note the addition of the messages variable. This is telling the Function to use the storage connection we specified before via the Connection property. And it is also specifying which queue to use in that storage account, in this case demoqueue.

All the code is doing is pulling the name out of the query string, newing up a TheMessage instance, and adding it to the IAsyncCollector variable.

    That will add the new message to the queue!

    Make sure Azurite is started within VS Code (both the queue and blob emulators). Run the app and send the same GET request as before: http://localhost:7071/api/PostToAQueue?name=Matt.

    If you have the Azure Storage Explorer installed, you can browse your local Queue and see the new message in there!

    Screenshot of Azure Storage Explorer with the new message in the queue

    Summing Up

We had a quick look at what Microsoft's serverless offering, Azure Functions, is composed of. It's a full-featured FaaS offering that enables you to write functions in your language of choice, including reusing packages such as those from NuGet.

    A highlight of Azure Functions is the way they are triggered and bound. The triggers define how a Function starts, and bindings are akin to input and output parameters on it that correspond to external services. The best part is that the Azure Function runtime takes care of maintaining the connection to the external services so you don't have to worry about new'ing up or disposing of the connections yourself.

We then wrote a quick Function that gets triggered by an HTTP request and then writes a query string parameter from that request into a local Azure Storage Queue.

    What's Next

    So, where can you go from here?

Think about how you can build real-world scenarios by integrating other Azure services. For example, you could use serverless integrations to build a workflow where an input payload received via an HTTP trigger is stored in Blob Storage (output binding), which in turn triggers another service (e.g., Cognitive Services) that processes the blob and returns an enhanced result.

    Keep an eye out for an update to this post where we walk through a scenario like this with code. Check out the resources below to help you get started on your own.

    Exercise

    This brings us close to the end of Week 1 with Azure Functions. We've learned core concepts, built and deployed our first Functions app, and explored quickstarts and scenarios for different programming languages. So, what can you do to explore this topic on your own?

    • Explore the Create Serverless Applications learning path which has several modules that explore Azure Functions integrations with various services.
    • Take up the Cloud Skills Challenge and complete those modules in a fun setting where you compete with peers for a spot on the leaderboard!

    Then come back tomorrow as we wrap up the week with a discussion on end-to-end scenarios, a recap of what we covered this week, and a look at what's ahead next week.

    Resources

    Start here for developer guidance in getting started with Azure Functions as a .NET/C# developer:

    Then learn about supported Triggers and Bindings for C#, with code snippets to show how they are used.

    Finally, explore Azure Functions samples for C# and learn to implement serverless solutions. Examples include:


    07. Functions for Python Devs

    · 7 min read
    Jay Miller

    Welcome to Day 7 of #30DaysOfServerless!

    Over the past couple of days, we've explored Azure Functions from the perspective of specific programming languages. Today we'll continue that trend by looking at Python - exploring the Timer Trigger and CosmosDB binding, and showcasing integration with a FastAPI-implemented web app.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance: Azure Functions On Python
    • Build & Deploy: Wildfire Detection Apps with Timer Trigger + CosmosDB
    • Demo: My Fire Map App: Using FastAPI and Azure Maps to visualize data
    • Next Steps: Explore Azure Samples
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Developer Guidance

    If you're a Python developer new to serverless on Azure, start with the Azure Functions Python Developer Guide. It covers:

    • Quickstarts with Visual Studio Code and Azure CLI
    • Adopting best practices for hosting, reliability and efficiency.
    • Tutorials showcasing Azure automation, image classification and more
    • Samples showcasing Azure Functions features for Python developers

    Now let's dive in and build our first Python-based Azure Functions app.


    Detecting Wildfires Around the World?

I live in California, which is known for lots of wildfires. I wanted to create a proof of concept for developing an application that could let me know if there was a wildfire detected near my home.

NASA has a few satellites orbiting the Earth that can detect wildfires. These satellites take scans of the radiative heat and use that to determine the likelihood of a wildfire. NASA updates this information about every 30 minutes, and it can take about four hours to scan and process the information.

    Fire Point Near Austin, TX

    I want to get the information but I don't want to ping NASA or another service every time I check.

What if I occasionally download all the data I need? Then I can query that local copy as much as I like.

I can create a script that does just that. Any time I say "I can create a script", that's a verbal cue for me to consider using an Azure Function. With the function running in the cloud, I can ensure the script runs even when I'm not at my computer.

    How the Timer Trigger Works

This function will utilize the Timer Trigger. This means Azure will call this function to run at a scheduled interval. This isn't the only way to keep the data in sync, but we know that ArcGIS, the service that we're using, says the data is only updated every 30 minutes or so.

    To learn more about the TimerTrigger as a concept, check out the Azure Functions documentation around Timers.

When we create the function, we tell it a few things: where the script lives (in our case, __init__.py), the binding type and direction, and notably how often it should run. We specify the timer using "schedule": "<CRON EXPRESSION>". For us that's 0 0,30 * * * *, which means every 30 minutes, at the top and bottom of the hour.

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "reqTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0,30 * * * *"
    }
  ]
}

    Next, we create the code that runs when the function is called.
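As a minimal sketch (assuming the same programming model as the function.json above, where the entry point lives in __init__.py), the handler starts out looking something like this before we add the real work:

import logging

import azure.functions as func


async def main(reqTimer: func.TimerRequest) -> None:
    # "reqTimer" must match the binding name declared in function.json
    if reqTimer.past_due:
        logging.info("The timer is running late!")
    logging.info("Time to fetch the latest wildfire data...")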

    Connecting to the Database and our Source

Disclaimer: The data that we're pulling is for educational purposes only. This is not meant to be a production-level application. You're welcome to play with this project, but ensure that you're using the data in compliance with Esri.

    Our function does two important things.

    1. It pulls data from ArcGIS that meets the parameters
    2. It stores that pulled data into our database

    If you want to check out the code in its entirety, check out the GitHub repository.

Pulling the data from ArcGIS is easy. We can use the ArcGIS Python API. Then, we need to load the service layer. Finally, we query that layer for the specific data.

from arcgis.gis import GIS
from arcgis.features import FeatureSet

g = GIS()  # our ArcGIS connection; the full project configures this elsewhere

def write_new_file_data(gis_id: str, layer: int = 0) -> FeatureSet:
    """Return the FeatureSet of fire points matching our query."""
    fire_data = g.content.get(gis_id)
    feature = fire_data.layers[layer]  # Loading the feature layer from ArcGIS
    q = feature.query(
        where="confidence >= 65 AND hours_old <= 4",  # The filter for the query
        return_distinct_values=True,
        out_fields="confidence, hours_old",  # The data we want to store with our points
        out_sr=4326,  # The spatial reference of the data
    )
    return q

    Then we need to store the data in our database.

We're using Cosmos DB for this. Cosmos DB is a NoSQL database, which means the data looks a lot like a Python dictionary because it's JSON. This means that we don't need to worry about converting the data into a format that can be stored in a relational database.

The second reason is that Cosmos DB is tied into the Azure ecosystem, so if we want to build Azure Functions around events from it later, we can.

    Our script grabs the information that we pulled from ArcGIS and stores it in our database.

from azure.cosmos.aio import CosmosClient

# COSMOS_CONNECTION_STRING, DATABASE and CONTAINER are configuration values defined elsewhere
async with CosmosClient.from_connection_string(COSMOS_CONNECTION_STRING) as client:
    database = client.get_database_client(DATABASE)
    container = database.get_container_client(container=CONTAINER)
    for record in data:
        await container.create_item(
            record,
            enable_automatic_id_generation=True,
        )

In our code, each of these functions lives in its own module. So in the main function we focus solely on what the Azure Function will be doing. The script that gets called is __init__.py; there, we have the entry point call the other functions.

    We created another function called load_and_write that does all the work outlined above. __init__.py will call that.

import azure.functions as func

# GIS_LAYER_ID, database, container and update_db are set up at module level
async def main(reqTimer: func.TimerRequest) -> None:
    await update_db.load_and_write(gis_id=GIS_LAYER_ID, database=database, container=container)

Then we deploy the function to Azure. I like to use VS Code's Azure extension, but you can also deploy it in a few other ways.

    Deploying the function via VS Code

Once the function is deployed, we can load the Azure portal and see a ping whenever the function is called. The pings correspond to the Function being run.

    We can also see the data now living in the datastore. Document in Cosmos DB

    It's in the Database, Now What?

    Now the real fun begins. We just loaded the last bit of fire data into a database. We can now query that data and serve it to others.

    As I mentioned before, our Cosmos DB data is also stored in Azure, which means that we can deploy Azure Functions to trigger when new data is added. Perhaps you can use this to check for fires near you and use a Logic App to send an alert to your phone or email.
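As a rough sketch (the names here are hypothetical, and the cosmosDBTrigger binding would be declared in that function's own function.json), such a follow-up function could look like this:

import logging

import azure.functions as func


async def main(documents: func.DocumentList) -> None:
    # Runs whenever new fire documents land in the monitored Cosmos DB container
    for doc in documents:
        if doc.get("confidence", 0) >= 90:
            logging.warning("High-confidence fire point detected: %s", doc.get("id"))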

    Another option is to create a web application that talks to the database and displays the data. I've created an example of this using FastAPI – https://jm-func-us-fire-notify.azurewebsites.net.

    Website that Checks for Fires
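Here's a rough sketch of what that kind of endpoint can look like (the database and container names are placeholders, not the ones from my deployment):

import os

from azure.cosmos import CosmosClient
from fastapi import FastAPI

app = FastAPI()
client = CosmosClient.from_connection_string(os.environ["COSMOS_CONNECTION_STRING"])
container = client.get_database_client("fires").get_container_client("firepoints")


@app.get("/fires")
def list_fires():
    # Return the stored fire points so the front end can plot them on a map
    query = "SELECT c.confidence, c.hours_old FROM c"
    return list(container.query_items(query=query, enable_cross_partition_query=True))

Run it locally with uvicorn and you have a tiny API sitting over the same data the Timer Trigger keeps fresh.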


    Next Steps

This article showcased the Timer Trigger and the HTTP Trigger for Azure Functions in Python. Now try exploring other triggers and bindings by browsing the Bindings code samples for Python and the Azure Functions samples for Python.

    Once you've tried out the samples, you may want to explore more advanced integrations or extensions for serverless Python scenarios. Here are some suggestions:

    And check out the resources for more tutorials to build up your Azure Functions skills.

    Exercise

I encourage you to fork the repository and try building and deploying it yourself! You can see the TimerTrigger and an HTTPTrigger serving the website.

    Then try extending it. Perhaps if wildfires are a big thing in your area, you can use some of the data available in Planetary Computer to check out some other datasets.

    Resources


    08. Functions + Serverless On Azure

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings, that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
• Integrate Functions with CosmosDB and SignalR to bring real-time push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn which walks you through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days) but this time, with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - that explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!


    09. Hello, Azure Container Apps

    · 12 min read
    Nitya Narasimhan

    Welcome to Day 9 of #30DaysOfServerless!


    What We'll Cover

    • The Week Ahead
    • Hello, Container Apps!
    • Quickstart: Build Your First ACA!
    • Under The Hood: Core ACA Concepts
    • Exercise: Try this yourself!
    • Resources: For self-study!


    The Week Ahead

    Welcome to Week 2 of #ServerlessSeptember, where we put the focus on Microservices and building Cloud-Native applications that are optimized for serverless solutions on Azure. One week is not enough to do this complex topic justice so consider this a 7-part jumpstart to the longer journey.

    1. Hello, Container Apps (ACA) - Learn about Azure Container Apps, a key service that helps you run microservices and containerized apps on a serverless platform. Know the core concepts. (Tutorial 1: First ACA)
    2. Communication with Microservices - Dive deeper into two key concepts: environments and virtual networking. Learn how microservices communicate in ACA, and walkthrough an example. (Tutorial 2: ACA with 3 Microservices)
    3. Scaling Your Container Apps - Learn about KEDA. Understand how to configure your ACA for auto-scaling with KEDA-supported triggers. Put this into action by walking through a tutorial. (Tutorial 3: Configure Autoscaling)
    4. Hello, Distributed Application Runtime (Dapr) - Learn about Dapr and how its Building Block APIs simplify microservices development with ACA. Know how the sidecar pattern enables incremental adoption of Dapr APIs without requiring any Dapr code integration in app. (Tutorial 4: Setup & Explore Dapr)
    5. Building ACA with Dapr - See how Dapr works with ACA by building a Dapr-enabled Azure Container App. Walk through a .NET tutorial using Pub/Sub and State Management APIs in an enterprise scenario. (Tutorial 5: Build ACA with Dapr)
    6. Managing Secrets With Dapr - We'll look at the Secrets API (a key Building Block of Dapr) and learn how it simplifies management of sensitive information in ACA.
7. Microservices + Serverless On Azure - We recap Week 2 (Microservices) and set the stage for Week 3 (Integrations) of Serverless September. Plus, self-study resources including ACA development tutorials in different languages.

    Ready? Let's go!


    Azure Container Apps!

When building your application, your first decision is about where you host your application. The Azure Architecture Center has a handy chart to help you decide between choices like Azure Functions, Azure App Service, Azure Container Instances, Azure Container Apps and more. But if you are new to this space, you'll need a good understanding of the terms and concepts behind the services. Today, we'll focus on Azure Container Apps (ACA) - so let's start with the fundamentals.

    Containerized App Defined

    A containerized app is one where the application components, dependencies, and configuration, are packaged into a single file (container image), which can be instantiated in an isolated runtime environment (container) that is portable across hosts (OS). This makes containers lightweight and scalable - and ensures that applications behave consistently on different host platforms.

    Container images can be shared via container registries (public or private) helping developers discover and deploy related apps with less effort. Scaling a containerized app can be as simple as activating more instances of its container image. However, this requires container orchestrators to automate the management of container apps for efficiency. Orchestrators use technologies like Kubernetes to support capabilities like workload scheduling, self-healing and auto-scaling on demand.

    Cloud-Native & Microservices

    Containers are seen as one of the 5 pillars of Cloud-Native app development, an approach where applications are designed explicitly to take advantage of the unique benefits of modern dynamic environments (involving public, private and hybrid clouds). Containers are particularly suited to serverless solutions based on microservices.

    • With serverless - developers use managed services instead of managing their own infrastructure. Services are typically event-driven and can be configured for autoscaling with rules tied to event triggers. Serverless is cost-effective, with developers paying only for the compute cycles and resources they use.
    • With microservices - developers compose their applications from independent components. Each component can be deployed in its own container, and scaled at that granularity. This simplifies component reuse (across apps) and maintainability (over time) - with developers evolving functionality at microservice (vs. app) levels.

    Hello, Azure Container Apps!

    Azure Container Apps is the managed service that helps you run containerized apps and microservices as a serverless compute solution, on Azure. You can:

    • deploy serverless API endpoints - autoscaled by HTTP request traffic
    • host background processing apps - autoscaled by CPU or memory load
    • handle event-driven processing - autoscaled by #messages in queue
    • run microservices - autoscaled by any KEDA-supported scaler.

    Want a quick intro to the topic? Start by watching the short video below - then read these two posts from our ZeroToHero series:


    Deploy Your First ACA

    Dev Options

    We typically have three options for development:

    • Use the Azure Portal - provision and deploy from a browser.
    • Use Visual Studio Code (with relevant extensions) - if you prefer an IDE
• Use Azure CLI - if you prefer to build and deploy from the command line.

    The documentation site has quickstarts for three contexts:

    For this quickstart, we'll go with the first option (sample image) so we can move quickly to core concepts. We'll leave the others as an exercise for you to explore.

    1. Setup Resources

    PRE-REQUISITES

    You need:

    • An Azure account with an active subscription
    • An installed Azure CLI

    Start by logging into Azure from the CLI. The command should launch a browser to complete the auth flow (or give you an option to take an alternative path).

    $ az login

    Successful authentication will result in extensive command-line output detailing the status of your subscription.

    Next, install the Azure Container Apps extension for the CLI

    $ az extension add --name containerapp --upgrade
    ...
    The installed extension 'containerapp' is in preview.

    Once successfully installed, register the Microsoft.App namespace.

    $ az provider register --namespace Microsoft.App

    Then set local environment variables in that terminal - and verify they are set correctly:

    $ RESOURCE_GROUP="my-container-apps"
    $ LOCATION="canadacentral"
    $ CONTAINERAPPS_ENVIRONMENT="my-environment"

    $ echo $LOCATION $RESOURCE_GROUP $CONTAINERAPPS_ENVIRONMENT
    canadacentral my-container-apps my-environment

    Now you can use Azure CLI to provision a resource group for this tutorial. Creating a resource group also makes it easier for us to delete/reclaim all resources used at the end of this tutorial.

    az group create \
    --name $RESOURCE_GROUP \
    --location $LOCATION
    Congratulations

    You completed the Setup step!

    On completion, the console should print out the details of the newly created resource group. You should also be able to visit the Azure Portal and find the newly-active my-container-apps resource group under your active subscription.

    2. Create Environment

    An environment is like the picket fence around your property. It creates a secure boundary that contains a group of container apps - such that all apps deployed to it share the same virtual network and logging resources.

    $ az containerapp env create \
    --name $CONTAINERAPPS_ENVIRONMENT \
    --resource-group $RESOURCE_GROUP \
    --location $LOCATION

    No Log Analytics workspace provided.
    Generating a Log Analytics workspace with name ...

    This can take a few minutes. When done, you will see the terminal display more details. You can also check the resource group in the portal and see that a Container Apps Environment and a Log Analytics Workspace are created for you as part of this step.

    You've got the fence set up. Now it's time to build your home - er, container app!

    3. Create Container App

    Here's the command we'll use to create our first Azure Container App. Note that the --image argument provides the link to a pre-existing containerapps-helloworld image.

    az containerapp create \
    --name my-container-app \
    --resource-group $RESOURCE_GROUP \
    --environment $CONTAINERAPPS_ENVIRONMENT \
    --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
    --target-port 80 \
    --ingress 'external' \
    --query properties.configuration.ingress.fqdn
    ...
    ...

    Container app created. Access your app at <URL>

The --ingress property shows that the app is open to external requests; in other words, it is publicly visible at the <URL> that is printed out on the terminal on successful completion of this step.

    4. Verify Deployment

Let's see if this works. You can verify your container app by visiting the URL returned above in your browser. You should see something like this!

    Container App Hello World

You can also visit the Azure Portal and look under the created Resource Group. You should see that a new Container App resource was created in this step.

    Congratulations

    You just created and deployed your first "Hello World" Azure Container App! This validates your local development environment setup and existence of a valid Azure subscription.

    5. Clean Up Your Resources

    It's good practice to clean up resources once you are done with a tutorial.

    THIS ACTION IS IRREVERSIBLE

    This command deletes the resource group we created above - and all resources in it. So make sure you specified the right name, then confirm deletion.

    $ az group delete --name $RESOURCE_GROUP
    Are you sure you want to perform this operation? (y/n):

    Note that you can also delete the resource group from the Azure Portal interface if that feels more comfortable. For now, we'll just use the Portal to verify that deletion occurred. If you had previously opened the Resource Group page for the created resource, just refresh it. You should see something like this:

    Resource Not Found


    Core Concepts

    COMING SOON

    An illustrated guide summarizing these concepts in a single sketchnote.

    We covered a lot today - we'll stop with a quick overview of core concepts behind Azure Container Apps, each linked to documentation for self-study. We'll dive into more details on some of these concepts in upcoming articles:

    • Environments - are the secure boundary around a group of container apps that are deployed in the same virtual network. They write logs to a shared Log Analytics workspace and can communicate seamlessly using Dapr, if used.
    • Containers refer to the container image deployed in the Azure Container App. They can use any runtime, programming language, or development stack - and be discovered using any public or private container registry. A container app can support multiple containers.
    • Revisions are immutable snapshots of an Azure Container App. The first revision is created when the ACA is first deployed, with new revisions created when redeployment occurs with revision-scope changes. Multiple revisions can run concurrently in an environment.
    • Application Lifecycle Management revolves around these revisions, with a container app having three phases: deployment, update and deactivation.
• Microservices are independent units of functionality in Cloud-Native architectures. A single container app typically represents a single microservice, and can be composed from one or more containers. Microservices can now be scaled and upgraded independently, giving your application more flexibility and control.
• Networking architecture consists of a virtual network (VNET) associated with the environment. Unless you provide a custom VNET at environment creation time, a default VNET is automatically created. The VNET configuration determines access (ingress, internal vs. external) and can influence auto-scaling choices (e.g., use the HTTP Edge Proxy and scale based on the number of HTTP requests).
    • Observability is about monitoring the health of your application and diagnosing it to improve reliability or performance. Azure Container Apps has a number of features - from Log streaming and Container console to integration with Azure Monitor - to provide a holistic view of application status over time.
    • Easy Auth is possible with built-in support for authentication and authorization including support for popular identity providers like Facebook, Google, Twitter and GitHub - alongside the Microsoft Identity Platform.

    Keep these terms in mind as we walk through more tutorials this week, to see how they find application in real examples. Finally, a note on Dapr, the Distributed Application Runtime that abstracts away many of the challenges posed by distributed systems - and lets you focus on your application logic.

    DAPR INTEGRATION MADE EASY

    Dapr uses a sidecar architecture, allowing Azure Container Apps to communicate with Dapr Building Block APIs over either gRPC or HTTP. Your ACA can be built to run with or without Dapr - giving you the flexibility to incrementally adopt specific APIs and unlock related capabilities as the need arises.

In later articles this week, we'll do a deeper dive into Dapr and build our first Dapr-enabled Azure Container App to get a better understanding of this integration.

    Exercise

Congratulations! You made it! By now you should have a good idea of what Cloud-Native development means, why Microservices and Containers are important to that vision - and how Azure Container Apps helps simplify the building and deployment of microservices-based applications using serverless architectures on Azure.

    Now it's your turn to reinforce learning by doing.

    Resources

    Three key resources to bookmark and explore:


    11. Scaling Container Apps

    · 7 min read
    Paul Yu

    Welcome to Day 11 of #30DaysOfServerless!

Yesterday we explored Azure Container Apps concepts related to environments, networking and microservices communication - and illustrated these with a deployment example. Today, we turn our attention to scaling your container apps with demand.


    What We'll Cover

    • What makes ACA Serverless?
• What is KEDA?
    • Scaling Your ACA
    • ACA Scaling In Action
    • Exercise: Explore azure-opensource-labs examples
    • Resources: For self-study!


    So, what makes Azure Container Apps "serverless"?

    Today we are going to focus on what makes Azure Container Apps (ACA) a "serverless" offering. But what does the term "serverless" really mean? As much as we'd like to think there aren't any servers involved, that is certainly not the case. In general, "serverless" means that most (if not all) server maintenance has been abstracted away from you.

With serverless, you don't spend any time managing and patching servers. This concern is offloaded to Azure and you simply focus on adding business value through application delivery. In addition to operational efficiency, cost efficiency can be achieved with serverless on-demand pricing models. Your workload horizontally scales out based on need and you only pay for what you use. To me, this is serverless, and my teammate @StevenMurawski said it best... "being able to scale to zero is what gives ACA its serverless magic."

    Scaling your Container Apps

If you don't know by now, ACA is built on a solid open-source foundation. Behind the scenes, it runs on a managed Kubernetes cluster and includes several open-source components out of the box, including Dapr to help you build and run microservices, Envoy Proxy for ingress capabilities, and KEDA for event-driven autoscaling. Again, you do not need to install these components yourself. All you need to be concerned with is enabling and/or configuring your container app to leverage these components.

    Let's take a closer look at autoscaling in ACA to help you optimize your container app.

    What is KEDA?

KEDA stands for Kubernetes Event-Driven Autoscaler. It is an open-source project initially started by Microsoft and Red Hat and has been donated to the Cloud Native Computing Foundation (CNCF). It is being maintained by a community of 200+ contributors and adopted by many large organizations. In terms of its status as a CNCF project, it is currently in the Incubating Stage, which means the project has gone through significant due diligence and is on its way towards the Graduation Stage.

Prior to KEDA, horizontally scaling your Kubernetes deployment was achieved through the Horizontal Pod Autoscaler (HPA), which relies on resource metrics such as CPU and memory to determine when additional replicas should be deployed. Being limited to CPU and memory falls a bit short for certain workloads. This is especially true for apps that need to process messages from a queue or HTTP-based apps that can handle a specific amount of incoming HTTP requests at a time. KEDA aims to fill that gap and provides a much more robust framework for scaling by working in conjunction with the HPA. It offers many scalers for you to implement and even allows your deployments to scale to zero! 🥳

    KEDA architecture

    Configuring ACA scale rules

As I mentioned above, ACA's autoscaling feature leverages KEDA and gives you the ability to configure the number of replicas to deploy based on rules (event triggers). The number of replicas can be configured as a static number or a range (minimum and maximum). So if you need your containers to run 24/7, set the min and max to the same value. By default, when you deploy a container app, it is set to scale from 0 to 10 replicas. The default scaling rule uses HTTP scaling and defaults to a minimum of 10 concurrent requests per second. Once the threshold of 10 concurrent requests per second is met, another replica will be deployed until it reaches the maximum number of replicas.

    At the time of this writing, a container app can have up to 30 replicas.

    Default autoscaler

    As a best practice, if you have a Min / max replicas range configured, you should configure a scaling rule even if it is just explicitly setting the default values.
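For example, in an ARM template (or the equivalent ACA YAML manifest), explicitly spelling out the defaults under properties.template might look roughly like this - a sketch, with illustrative values:

"scale": {
  "minReplicas": 0,
  "maxReplicas": 10,
  "rules": [
    {
      "name": "http-rule",
      "http": {
        "metadata": {
          "concurrentRequests": "10"
        }
      }
    }
  ]
}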

    Adding HTTP scaling rule

    In addition to HTTP scaling, you can also configure an Azure queue rule, which allows you to use Azure Storage Queues as an event data source.

    Adding Azure Queue scaling rule

    The most flexibility comes with the Custom rule type. This opens up a LOT more options for scaling. All of KEDA's event-based scalers are supported with this option 🚀

    Adding Custom scaling rule

    Translating KEDA templates to Azure templates

When you implement Custom rules, you need to become familiar with translating KEDA templates to Azure Resource Manager templates or ACA YAML manifests. The KEDA scaler documentation is great, and it should be simple to translate KEDA template metadata to ACA rule metadata.

The images below show how to translate a scaling rule which uses Azure Service Bus as an event data source. The custom rule type is set to azure-servicebus and the details of the service bus are added to the Metadata section. One important thing to note here is that the connection string to the service bus was added as a secret on the container app, and the trigger parameter must be set to connection.

    Azure Container App custom rule metadata

    Azure Container App custom rule metadata
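Expressed in an ARM template, that custom rule ends up looking roughly like this (queue name, message count and secret name are illustrative):

"scale": {
  "minReplicas": 0,
  "maxReplicas": 5,
  "rules": [
    {
      "name": "servicebus-queue-rule",
      "custom": {
        "type": "azure-servicebus",
        "metadata": {
          "queueName": "myqueue",
          "messageCount": "5"
        },
        "auth": [
          {
            "secretRef": "servicebus-connectionstring",
            "triggerParameter": "connection"
          }
        ]
      }
    }
  ]
}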

    Additional examples of KEDA scaler conversion can be found in the resources section and example video below.

    See Container App scaling in action

    Now that we've built up some foundational knowledge on how ACA autoscaling is implemented and configured, let's look at a few examples.

    Autoscaling based on HTTP traffic load

    Autoscaling based on Azure Service Bus message queues

    Summary

ACA brings you a true serverless experience and gives you the ability to configure autoscaling rules based on KEDA scaler templates. This gives you the flexibility to scale based on a wide variety of data sources in an event-driven manner. With the number of built-in scalers currently available, there is probably a scaler out there for all your use cases. If not, I encourage you to get involved with the KEDA community and help make it better!

    Exercise

By now, you've probably read and seen enough and are ready to give this autoscaling thing a try. The example I walked through in the videos above can be found in the azure-opensource-labs repo. I highly encourage you to head over to the containerapps-terraform folder and try the lab out. There you'll find instructions which cover all the steps and tools you'll need to implement autoscaling container apps within your own Azure subscription.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun scaling your containers!

    Resources


    12. Build With Dapr!

    · 8 min read
    Nitya Narasimhan

    Welcome to Day 12 of #30DaysOfServerless!

    So far we've looked at Azure Container Apps - what it is, how it enables microservices communication, and how it enables auto-scaling with KEDA compliant scalers. Today we'll shift gears and talk about Dapr - the Distributed Application Runtime - and how it makes microservices development with ACA easier with core building blocks and a sidecar architecture!

    Ready? Let's go!


    What We'll Cover

    • What is Dapr and why use it?
    • Building Block APIs
    • Dapr Quickstart and Tutorials
    • Dapr-enabled ACA: A Sidecar Approach
    • Exercise: Build & Deploy a Dapr-enabled ACA.
    • Resources: For self-study!


    Hello, Dapr!

Building distributed applications is hard. Building reliable and portable microservices means having middleware that deals with challenges like service discovery, sync and async communications, state management, secure information sharing and more. Integrating these support services into your application can be challenging from both development and maintenance perspectives, adding complexity that is independent of the core application logic you want to focus on.

This is where Dapr (Distributed Application Runtime) shines - it's defined as:

    a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.

    But what does this actually mean to me as an app developer?


    Dapr + Apps: A Sidecar Approach

    The strength of Dapr lies in its ability to:

    • abstract complexities of distributed systems middleware - with Building Block APIs that implement components using best practices to tackle key challenges.
    • implement a Sidecar Pattern with interactions via APIs - allowing applications to keep their codebase clean and focus on app logic.
    • be Incrementally Adoptable - allowing developers to start by integrating one API, then evolving to use more as and when needed.
    • be Platform Agnostic - allowing applications to be developed in a preferred language or framework without impacting integration capabilities.

The application-Dapr sidecar interaction is illustrated below. The API abstraction allows applications to get the desired functionality without having to know how it was implemented, or without having to integrate Dapr-specific code into their codebase. Note how the sidecar process listens on port 3500 and the API provides clear routes for the specific building blocks supported by Dapr (e.g., /secrets, /state, etc.)
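To make that concrete, here's a quick sketch (a generic illustration, with a hypothetical statestore component name) of what calling the state building block over plain HTTP could look like from application code:

import requests

# The Dapr sidecar listens locally on port 3500; "statestore" is the component name
DAPR_STATE_URL = "http://localhost:3500/v1.0/state/statestore"

# Save state - no state-store SDK or connection string in the app code
requests.post(DAPR_STATE_URL, json=[{"key": "order-1", "value": {"item": "widget"}}])

# Read it back by key
order = requests.get(f"{DAPR_STATE_URL}/order-1").json()
print(order)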


    Dapr Building Blocks: API Interactions

Dapr Building Blocks refers to the HTTP and gRPC API endpoints exposed by the Dapr sidecar, providing key capabilities like state management, observability, service-to-service invocation, pub/sub messaging and more to the associated application.

    Building Blocks: Under the Hood
The Dapr API is implemented by modular components that codify best practices for tackling the specific challenge that they represent. The API abstraction allows component implementations to evolve, or alternatives to be used, without requiring changes to the application codebase.

    The latest Dapr release has the building blocks shown in the above figure. Not all capabilities are available to Azure Container Apps by default - check the documentation for the latest updates on this. For now, Azure Container Apps + Dapr integration provides the following capabilities to the application:

    In the next section, we'll dive into Dapr-enabled Azure Container Apps. Before we do that, here are a couple of resources to help you explore the Dapr platform by itself, and get more hands-on experience with the concepts and capabilities:

• Dapr Quickstarts - build your first Dapr app, then explore quickstarts for the core APIs including service-to-service invocation, pub/sub, state management, bindings and secrets management.
    • Dapr Tutorials - go beyond the basic quickstart and explore more realistic service integrations and usage scenarios. Try the distributed calculator example!

    Integrate Dapr & Azure Container Apps

Dapr currently has a v1.9 (preview) version, but Azure Container Apps supports Dapr v1.8. In this section, we'll look at what it takes to enable, configure, and use Dapr integration with Azure Container Apps. It involves 3 steps: enabling Dapr using settings, configuring Dapr components (API) for use, then invoking the APIs.

Here's a simple publisher-subscriber scenario from the documentation. We have two container apps identified as publisher-app and subscriber-app deployed in a single environment. Each ACA has an activated daprd sidecar, allowing them to use the Pub/Sub API to communicate asynchronously with each other - without having to write the underlying pub/sub implementation themselves. Rather, we can see that the Dapr API uses a pubsub.azure.servicebus component to implement that capability.

    Pub/sub example

    Let's look at how this is setup.

    1. Enable Dapr in ACA: Settings

    We can enable Dapr integration in the Azure Container App during creation by specifying settings in one of two ways, based on your development preference:

    • Using Azure CLI: use custom commandline options for each setting
    • Using Infrastructure-as-Code (IaC): using properties for Bicep, ARM templates

Once enabled, Dapr will run in the same environment as the Azure Container App, and listen on port 3500 for API requests. The Dapr sidecar can be shared by multiple Container Apps deployed in the same environment.

    There are four main settings we will focus on for this demo - the example below shows the ARM template properties, but you can find the equivalent CLI parameters here for comparison.

    • dapr.enabled - enable Dapr for Azure Container App
    • dapr.appPort - specify port on which app is listening
    • dapr.appProtocol - specify if using http (default) or gRPC for API
    • dapr.appId - specify unique application ID for service discovery, usage

    These are defined under the properties.configuration section for your resource. Changing Dapr settings does not update the revision but it will restart ACA revisions and replicas. Here is what the relevant section of the ARM template looks like for the publisher-app ACA in the scenario shown above.

    "dapr": {
    "enabled": true,
    "appId": "publisher-app",
    "appProcotol": "http",
    "appPort": 80
    }

    2. Configure Dapr in ACA: Components

The next step after activating the Dapr sidecar is to define the APIs that you want to use and potentially specify the Dapr components (specific implementations of that API) that you prefer. These components are created at environment level, and by default, Dapr-enabled container apps in an environment will load the complete set of deployed components -- use the scopes property to ensure only components needed by a given app are loaded at runtime. Here's what the ARM template resources section looks like for the example above. This tells us that the environment has a dapr-pubsub component of type pubsub.azure.servicebus deployed - where that component is loaded by container apps with the Dapr ids publisher-app and subscriber-app.

    USING MANAGED IDENTITY + DAPR

The secrets approach used here is ideal for demo purposes. However, we recommend using Managed Identity with Dapr in production. For more details on secrets, check out tomorrow's post on Secrets and Managed Identity in Azure Container Apps.

{
  "resources": [
    {
      "type": "daprComponents",
      "name": "dapr-pubsub",
      "properties": {
        "componentType": "pubsub.azure.servicebus",
        "version": "v1",
        "secrets": [
          {
            "name": "sb-root-connectionstring",
            "value": "value"
          }
        ],
        "metadata": [
          {
            "name": "connectionString",
            "secretRef": "sb-root-connectionstring"
          }
        ],
        // Application scopes
        "scopes": ["publisher-app", "subscriber-app"]
      }
    }
  ]
}

    With this configuration, the ACA is now set to use pub/sub capabilities from the Dapr sidecar, using standard HTTP requests to the exposed API endpoint for this service.
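As a sketch of what that looks like from the publisher-app's code (the topic name here is just an example), publishing a message is a single HTTP call to the sidecar:

import requests

# "dapr-pubsub" is the component name configured above; "orders" is an example topic
PUBLISH_URL = "http://localhost:3500/v1.0/publish/dapr-pubsub/orders"

requests.post(PUBLISH_URL, json={"orderId": 42, "status": "created"})

The subscriber-app never talks to Service Bus directly either - its own sidecar delivers the message to an endpoint the app exposes.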

    Exercise: Deploy Dapr-enabled ACA

    In the next couple posts in this series, we'll be discussing how you can use the Dapr secrets API and doing a walkthrough of a more complex example, to show how Dapr-enabled Azure Container Apps are created and deployed.

However, you can get hands-on experience with these concepts by walking through one of these two tutorials, each providing an alternative approach to configure and set up the application described in the scenario below:

    Resources

    Here are the main resources to explore for self-study:


    13. Secrets + Managed Identity

    · 11 min read
    Kendall Roden

    Welcome to Day 13 of #30DaysOfServerless!

    In the previous post, we learned about all things Distributed Application Runtime (Dapr) and highlighted the capabilities you can unlock through managed Dapr in Azure Container Apps! Today, we'll dive into how we can make use of Container Apps secrets and managed identities to securely access cloud-hosted resources that your Container Apps depend on!

    Ready? Let's go.


    What We'll Cover

    • Secure access to external services overview
    • Using Container Apps Secrets
    • Using Managed Identity for connecting to Azure resources
    • Using Dapr secret store component references (Dapr-only)
    • Conclusion
    • Resources: For self-study!


    Securing access to external services

    In most, if not all, microservice-based applications, one or more services in the system will rely on other cloud-hosted resources; Think external services like databases, secret stores, message brokers, event sources, etc. To interact with these services, an application must have the ability to establish a secure connection. Traditionally, an application will authenticate to these backing resources using some type of connection string or password.

    I'm not sure if it was just me, but one of the first things I learned as a developer was to ensure credentials and other sensitive information were never checked into the codebase. The ability to inject these values at runtime is a non-negotiable.

    In Azure Container Apps, applications can securely leverage connection information via Container Apps Secrets. If the resource is Azure-based, a more ideal solution that removes the dependence on secrets altogether is using Managed Identity.

    Specifically for Dapr-enabled container apps, users can now tap into the power of the Dapr secrets API! With this new capability unlocked in Container Apps, users can call the Dapr secrets API from application code to securely access secrets from Key Vault or other backing secret stores. In addition, customers can also make use of a secret store component reference when wiring up Dapr state store components and more!
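As a rough sketch (the secret store component and secret names are placeholders), fetching a secret from application code is one call to the sidecar's secrets endpoint:

import requests

# "my-secret-store" is the name of the configured Dapr secret store component
resp = requests.get("http://localhost:3500/v1.0/secrets/my-secret-store/queue-connection-string")
secret = resp.json()["queue-connection-string"]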

ALSO, I'm excited to share that support for Dapr + Managed Identity is now available! What does this mean? It means that you can enable Managed Identity for your container app - and when establishing connections via Dapr, the Dapr sidecar can use this identity! This means simplified components without the need for secrets when connecting to Azure services!

    Let's dive a bit deeper into the following three topics:

    1. Using Container Apps secrets in your container apps
    2. Using Managed Identity to connect to Azure services
    3. Connecting to services securely for Dapr-enabled apps

    Secure access to external services without Dapr

    Leveraging Container Apps secrets at runtime

Users can leverage this approach for any values which need to be securely stored; however, it is recommended to use Managed Identity where possible when connecting to Azure-specific resources.

    First, let's establish a few important points regarding secrets in container apps:

    • Secrets are scoped at the container app level, meaning secrets cannot be shared across container apps today
    • When running in multiple-revision mode,
      • changes to secrets do not generate a new revision
      • running revisions will not be automatically restarted to reflect changes. If you want to force-update existing container app revisions to reflect the changed secrets values, you will need to perform revision restarts.
    STEP 1

    Provide the secure value as a secret parameter when creating your container app using the syntax "SECRET_NAME=SECRET_VALUE"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name queuereader \
    --environment "my-environment-name" \
    --image demos/queuereader:v1 \
    --secrets "queue-connection-string=$CONNECTION_STRING"
    STEP 2

    Create an environment variable which references the value of the secret created in step 1 using the syntax "ENV_VARIABLE_NAME=secretref:SECRET_NAME"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name myQueueApp \
    --environment "my-environment-name" \
    --image demos/myQueueApp:v1 \
    --secrets "queue-connection-string=$CONNECTIONSTRING" \
    --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"

    This ConnectionString environment variable can be used within your application code to securely access the connection string value at runtime.
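    As an illustration (not from the original post), here is a minimal sketch of reading those values in a .NET 6 app, assuming the queue names used above and the Azure.Storage.Queues package; top-level statements and implicit usings are assumed:

    using System;
    using Azure.Storage.Queues;

    // Values injected by Container Apps at runtime:
    // "ConnectionString" maps to the secret via secretref:queue-connection-string.
    var queueName = Environment.GetEnvironmentVariable("QueueName");
    var connectionString = Environment.GetEnvironmentVariable("ConnectionString");

    // Create a client for the queue using the secret-backed connection string.
    var queueClient = new QueueClient(connectionString, queueName);
    await queueClient.SendMessageAsync("Hello from Container Apps secrets!");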

    Using Managed Identity to connect to Azure services

    A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. This approach is recommended where possible as it eliminates the need for managing secret credentials in your container apps and allows you to properly scope the permissions needed for a given container app using role-based access control. Both system-assigned and user-assigned identities are available in container apps. For more background on managed identities in Azure AD, see Managed identities for Azure resources.

    To configure your app with a system-assigned managed identity, you will follow steps similar to the following:

    STEP 1

    Run the following command to create a system-assigned identity for your container app

    az containerapp identity assign \
    --name "myQueueApp" \
    --resource-group "my-resource-group" \
    --system-assigned
    STEP 2

    Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    PRINCIPAL_ID=$(az containerapp identity show \
    --name "myQueueApp" \
    --resource-group "my-resource-group" \
    --query principalId --output tsv)
    STEP 3

    Assign the appropriate roles and permissions to your container app's managed identity using the Principal ID in step 2 based on the resources you need to access (example below)

    az role assignment create \
    --role "Storage Queue Data Contributor" \
    --assignee $PRINCIPAL_ID \
    --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/queueServices/default/queues/<queue>"

    After running the above commands, your container app will be able to access your Azure Storage queue because its managed identity has been assigned the "Storage Queue Data Contributor" role. The role assignments you create depend solely on the resources your container app needs to access. To instrument your code to use this managed identity, see more details here.
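    For orientation, here is roughly what the application side could look like with the Azure.Identity and Azure.Storage.Queues packages; the queue URI placeholders are assumptions, and DefaultAzureCredential resolves to the container app's managed identity at runtime:

    using System;
    using Azure.Identity;
    using Azure.Storage.Queues;

    // DefaultAzureCredential picks up the container app's managed identity in Azure.
    var credential = new DefaultAzureCredential();

    // No connection string or key required; access comes from the role assignment above.
    var queueClient = new QueueClient(
        new Uri("https://<storage-account>.queue.core.windows.net/<queue>"),
        credential);

    await queueClient.SendMessageAsync("Hello from a managed identity!");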

    In addition to using managed identity to access services from your container app, you can also use managed identity to pull your container images from Azure Container Registry.

    Secure access to external services with Dapr

    For Dapr-enabled apps, there are a few ways to connect to the resources your solutions depend on. In this section, we will discuss when to use each approach.

    1. Using Container Apps secrets in your Dapr components
    2. Using Managed Identity with Dapr Components
    3. Using Dapr Secret Stores for runtime secrets and component references

    Using Container Apps secrets in Dapr components

    Prior to support for the Dapr Secrets Management building block, this was the only approach available for securely storing sensitive values for use in Dapr components.

    In Dapr OSS, when no secret store reference is provided in a Dapr component file, the default secret store is set to "Kubernetes secrets". In Container Apps, we do not expose the ability to use this default store. Rather, Container Apps secrets can be used in its place.

    With the introduction of the Secrets API and the ability to use Dapr + Managed Identity, this approach is useful for a limited number of scenarios:

    • Quick demos and dev/test scenarios using the Container Apps CLI
    • Securing values when a secret store is not configured or available for use
    • Using service principal credentials to configure an Azure Key Vault secret store component (using Managed Identity is recommended)
    • Securing access credentials which may be required when creating a non-Azure secret store component
    STEP 1

    Create a Dapr component which can be used by one or more services in the container apps environment. In the below example, you will create a secret to store the storage account key and reference this secret from the appropriate Dapr metadata property.

    componentType: state.azure.blobstorage
    version: v1
    metadata:
    - name: accountName
      value: testStorage
    - name: accountKey
      secretRef: account-key
    - name: containerName
      value: myContainer
    secrets:
    - name: account-key
      value: "<STORAGE_ACCOUNT_KEY>"
    scopes:
    - myApp
    STEP 2

    Deploy the Dapr component using the below command with the appropriate arguments.

     az containerapp env dapr-component set \
    --name "my-environment" \
    --resource-group "my-resource-group" \
    --dapr-component-name statestore \
    --yaml "./statestore.yaml"

    Using Managed Identity with Dapr Components

    Dapr-enabled container apps can now make use of managed identities within Dapr components. This is the ideal path for connecting to Azure services securely, and it allows for the removal of sensitive values from the component itself.

    The Dapr sidecar makes use of the existing identities available within a given container app; Dapr itself does not have its own identity. Therefore, the steps to enable Dapr + MI are similar to those in the section regarding managed identity for non-Dapr apps. See example steps below specifically for using a system-assigned identity:

    1. Create a system-assigned identity for your container app

    2. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    3. Assign the appropriate roles and permissions (for accessing resources backing your Dapr components) to your ACA's managed identity using the Principal ID

    4. Create a simplified Dapr component without any secrets required

      componentType: state.azure.blobstorage
      version: v1
      metadata:
      - name: accountName
        value: testStorage
      - name: containerName
        value: myContainer
      scopes:
      - myApp
    5. Deploy the component to test the connection from your container app via Dapr!

    Keep in mind that all Dapr components are loaded by every Dapr-enabled container app in an environment by default. To prevent apps that lack the appropriate permissions from trying (and failing) to load a component, use scopes. This ensures that only applications with the appropriate identities to access the backing resource load the component.

    Using Dapr Secret Stores for runtime secrets and component references

    Dapr integrates with secret stores to provide apps and other components with secure storage and access to secrets such as access keys and passwords. The Dapr Secrets API is now available for use in Container Apps.

    Using Dapr’s secret store building block typically involves the following:

    • Setting up a component for a specific secret store solution.
    • Retrieving secrets using the Dapr secrets API in the application code.
    • Optionally, referencing secrets in Dapr component files.

    Let's walk through a couple of sample workflows involving the use of Dapr's Secrets Management capabilities!

    Setting up a component for a specific secret store solution

    1. Create an Azure Key Vault instance for hosting the secrets required by your application.

      az keyvault create --name "<your-unique-keyvault-name>" --resource-group "my-resource-group" --location "<your-location>"
    2. Create an Azure Key Vault component in your environment without any secret values, as the connection to Azure Key Vault will be established via Managed Identity.

      componentType: secretstores.azure.keyvault
      version: v1
      metadata:
      - name: vaultName
        value: "[your_keyvault_name]"
      scopes:
      - myApp

      az containerapp env dapr-component set \
      --name "my-environment" \
      --resource-group "my-resource-group" \
      --dapr-component-name secretstore \
      --yaml "./secretstore.yaml"
    3. Run the following command to create a system-assigned identity for your container app

      az containerapp identity assign \
      --name "myApp" \
      --resource-group "my-resource-group" \
      --system-assigned
    4. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

      PRINCIPAL_ID=$(az containerapp identity show \
      --name "myApp" \
      --resource-group "my-resource-group" \
      --query principalId --output tsv)
    5. Assign the appropriate roles and permissions to your container app's managed identity to access Azure Key Vault

      az role assignment create \
      --role "Key Vault Secrets Officer" \
      --assignee $PRINCIPAL_ID \
      --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
    6. Begin using the Dapr Secrets API in your application code to retrieve secrets (see the sketch below). See additional details here.
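    To illustrate step 6, here is a minimal sketch using the Dapr .NET SDK (Dapr.Client package), assuming the component above is named secretstore and that it holds a hypothetical secret called my-connection-string; the raw HTTP equivalent is a GET against the sidecar at /v1.0/secrets/secretstore/my-connection-string:

    using Dapr.Client;

    // The DaprClient talks to the Dapr sidecar running next to the container app.
    var daprClient = new DaprClientBuilder().Build();

    // Retrieve the secret from the "secretstore" component backed by Azure Key Vault.
    // The result is a dictionary of key/value pairs for the requested secret.
    // "my-connection-string" is a placeholder secret name for illustration only.
    var secret = await daprClient.GetSecretAsync("secretstore", "my-connection-string");
    string connectionString = secret["my-connection-string"];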

    Referencing secrets in Dapr component files

    Once a Dapr secret store component is available in the environment, it can be used to retrieve secrets for use in other components. For example, when creating a state store component, you can add a reference to the Dapr secret store from which you would like to source connection information. You will no longer use secrets directly in the component spec, but rather will instruct the Dapr sidecar to retrieve the secrets from the specified store.

    componentType: state.azure.blobstorage
    version: v1
    metadata:
    - name: accountName
      value: testStorage
    - name: accountKey
      secretRef: account-key
    - name: containerName
      value: myContainer
    secretStoreComponent: "<SECRET_STORE_COMPONENT_NAME>"
    scopes:
    - myApp

    Summary

    In this post, we have covered the high-level details of how to work with secret values in Azure Container Apps for both Dapr and non-Dapr apps. In the next article, we will walk through a complex end-to-end Dapr example which makes use of the new support for Dapr + Managed Identity. Stay tuned for additional documentation around Dapr secrets, as it will be released in the next two weeks!

    Resources

    Here are the main resources to explore for self-study:

    Image showing container apps role assignment

  • Lastly, we need to restart the container app revision; to do so, run the command below:

     ##Get revision name and assign it to a variable
    $REVISION_NAME = (az containerapp revision list `
    --name $BACKEND_SVC_NAME `
    --resource-group $RESOURCE_GROUP `
    --query [0].name)

    ##Restart revision by name
    az containerapp revision restart `
    --resource-group $RESOURCE_GROUP `
    --name $BACKEND_SVC_NAME `
    --revision $REVISION_NAME
  • Run end-to-end Test on Azure

    From the Azure Portal, select the Azure Container App orders-processor and navigate to Log stream under the Monitoring tab; leave the stream connected and open. Next, select the Azure Service Bus namespace ordersservices, select the topic orderreceivedtopic, select the subscription named orders-processor-subscription, then click on Service Bus Explorer (preview). From there we need to publish/send a message. Use the JSON payload below:

    ```json
    {
      "data": {
        "reference": "Order 150",
        "quantity": 150,
        "createdOn": "2022-05-10T12:45:22.0983978Z"
      }
    }
    ```

    If all is configured correctly, you should start seeing the information logs in the Container Apps Log stream, similar to the images below.

    Image showing publishing messages from Azure Service

    Information logs on the Log stream of the deployed Azure Container App Image showing ACA Log Stream

    🎉 CONGRATULATIONS

    You have successfully deployed an Azure Container App to the cloud and configured the Dapr Pub/Sub API with Azure Service Bus.

    9. Clean up

    If you are done with the tutorial, use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name $RESOURCE_GROUP

    Exercise

    I've left the configuration of the Dapr State Store API with Azure Cosmos DB for you :)

    When you look at the action method OrderReceived in the controller ExternalOrdersController, you will see that I left a line with a ToDo: note; this line is responsible for saving the received message (OrderModel) into Azure Cosmos DB.

    There is no need to change anything in the code base (other than enabling this commented line). That's the beauty of Dapr Building Blocks: they let us plug components into our microservice application without any plumbing or bringing in external SDKs.
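    For orientation only, the commented line most likely boils down to a Dapr state call along these lines; this is a sketch with the Dapr .NET SDK that assumes an injected DaprClient, a state store component named statestore, and a Reference property on OrderModel (check the repo for the actual names):

    // Sketch of the ToDo line inside ExternalOrdersController.OrderReceived.
    // With a Cosmos DB-backed component named "statestore", Dapr persists the
    // received order document to Cosmos DB; key and property names are assumptions.
    await daprClient.SaveStateAsync("statestore", orderModel.Reference, orderModel);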

    For sure, you need to work on the configuration part of the Dapr State Store by creating a new component file, just like what we did with the Pub/Sub API. The things you need to work on are:

    • Provision Azure Cosmos DB Account and obtain its masterKey.
    • Create a Dapr Component file adhering to Dapr Specs.
    • Create an Azure Container Apps component file adhering to ACA component specs.
    • Test locally on your dev machine using Dapr Component file.
    • Register the new Dapr State Store component with the Azure Container Apps environment and set the Cosmos DB masterKey from the Azure Portal. If you want to challenge yourself more, use the Managed Identity approach as done in this post! It's the right way to protect your keys, and you won't have to worry about managing Cosmos DB keys anymore!
    • Build a new image of the application and push it to Azure Container Registry.
    • Update Azure Container Apps and create a new revision which contains the updated code.
    • Verify the results by checking Azure Cosmos DB; you should see the Order Model stored in Cosmos DB.

    If you need help, you can always refer to my blog post Azure Container Apps State Store With Dapr State Management API, which contains exactly what you need to implement here, so I'm very confident you will be able to complete this exercise with no issues. Happy coding :)

    What's Next?

    If you enjoyed working with Dapr and Azure Container Apps and want a deeper dive into more complex scenarios (Dapr bindings, service discovery, autoscaling with KEDA, synchronous service communication, distributed tracing, health probes, etc.) where multiple services are deployed to a single Container Apps Environment, I have created a detailed tutorial which walks you through building the application step by step, with thorough details.

    The posts published so far are listed below, and I'm publishing more on a weekly basis, so stay tuned :)

    Resources


    15. ACA + Serverless On Azure

    · 4 min read
    Nitya Narasimhan
    Devanshi Joshi

    Welcome to Day 15 of #30DaysOfServerless!

    This post marks the midpoint of our Serverless on Azure journey! Our Week 2 Roadmap showcased two key technologies - Azure Container Apps (ACA) and Dapr - for building serverless microservices. We'll also look at what happened elsewhere in #ServerlessSeptember, then set the stage for our next week's focus: Serverless Integrations.

    Ready? Let's Go!


    What We'll Cover

    • ICYMI: This Week on #ServerlessSeptember
    • Recap: Microservices, Azure Container Apps & Dapr
    • Coming Next: Serverless Integrations
    • Exercise: Take the Cloud Skills Challenge
    • Resources: For self-study!

    This Week In Events

    We had a number of activities happen this week - here's a quick summary:

    This Week in #30Days

    In our #30Days series we focused on Azure Container Apps and Dapr.

    • In Hello Container Apps we learned how Azure Container Apps helps you run microservices and containerized apps on serverless platforms. And we built and deployed our first ACA.
    • In Microservices Communication we explored concepts like environments and virtual networking, with a hands-on example to show how two microservices communicate in a deployed ACA.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and how to configure autoscaling for your ACA based on KEDA-supported triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr) and learned how its Building Block APIs and sidecar architecture make it easier to develop microservices with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
    • Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Here's a visual recap:

    Self Study: Code Samples & Tutorials

    There's no better way to get familiar with the concepts than to dive in and play with code samples and hands-on tutorials. Here are 4 resources to bookmark and try out:

    1. Dapr Quickstarts - these walk you through samples showcasing individual Building Block APIs - with multiple language options available.
    2. Dapr Tutorials provides more complex examples of microservices applications and tools usage, including a Distributed Calculator polyglot app.
    3. Next, try to Deploy a Dapr application to Azure Container Apps to get familiar with the process of setting up the environment, then deploying the app.
    4. Or, explore the many Azure Container Apps samples showcasing various features and more complex architectures tied to real world scenarios.

    What's Next: Serverless Integrations!

    So far we've talked about core technologies (Azure Functions, Azure Container Apps, Dapr) that provide foundational support for your serverless solution. Next, we'll look at Serverless Integrations - specifically at technologies like Azure Logic Apps and Azure Event Grid that automate workflows and create seamless end-to-end solutions that integrate other Azure services in serverless-friendly ways.

    Take the Challenge!

    The Cloud Skills Challenge is still going on, and we've already had hundreds of participants join and complete the learning modules to skill up on Serverless.

    There's still time to join and get yourself on the leaderboard. Get familiar with Azure Functions, SignalR, Logic Apps, Azure SQL and more - in serverless contexts!!



    17. Logic Apps + Cosmos DB

    · 6 min read
    Brian Benz

    Welcome to Day 17 of #30DaysOfServerless!

    In past weeks, we've covered serverless technologies that provide core capabilities (functions, containers, microservices) for building serverless solutions. This week we're looking at technologies that make service integrations more seamless, starting with Logic Apps. Let's look at one usage example today!

    Ready? Let's Go!


    What We'll Cover

    • Introduction to Logic Apps
    • Setting up Cosmos DB for Logic Apps
    • Setting up a Logic App connection and event
    • Writing data to Cosmos DB from a Logic app
    • Resources: For self-study!


    Introduction to Logic Apps

    Previously in Serverless September, we've covered Azure Functions, where the event triggers code. In Logic Apps, the event triggers a workflow that you design. Logic Apps enable serverless applications to connect to external sources for data, then automate business processes via workflows.

    In this post I'll walk you through setting up a Logic App that works with Cosmos DB. For this example, we'll connect to the MSN weather service and design a Logic App workflow that collects data when the weather changes, then writes the data to Cosmos DB.

    PREREQUISITES

    Setup Cosmos DB for Logic Apps

    Cosmos DB has many APIs to choose from, but to use the default Logic App connection, we need to choose the Cosmos DB SQL API. We'll set this up via the Azure Portal.

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then create a new resource group called CosmosWeather. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create

    Azure Cosmos DB is available in two different capacity modes: provisioned throughput and serverless. You can perform the same database operations in both modes, but the way you get billed for these operations is different. We will be using provisioned throughput and the free tier for this example.

    Setup the CosmosDB account

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    | Setting | Action |
    | --- | --- |
    | Container ID | id |
    | Container partition | /id |

    Press OK to create a database and container

    A database is analogous to a traditional DBMS namespace. It's used to organize one or more containers.

    Setup the CosmosDB Container

    Now we're ready to set up our Logic App and write to Cosmos DB!

    Setup Logic App connection + event

    Once the Cosmos DB SQL API account is created, we can set up our Logic App. From the portal dashboard, select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults. Once your new Logic App is created, select Create a workflow from designer to get started.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for weather on the right under Add a trigger. Choose MSN Weather. Choose When the current conditions change as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Add a location. Valid locations are City, Region, State, Country, Landmark, Postal Code, latitude and longitude. This triggers a new workflow when the conditions change for a location.

    Write data from Logic App to Cosmos DB

    Now we are ready to set up the action to write data to Cosmos DB. Choose add an action and choose Cosmos DB.

    An action is each step in a workflow after the trigger. Every action runs some operation in a workflow.

    In this case, we will be writing a JSON document to the Cosmos DB container we created earlier. Choose Create or Update Document from the actions. At this point you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger

    Start with the connection setup for the Cosmos DB action. Select Access Key, and provide the primary read-write key (found under Keys in Cosmos DB) and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    You will need a unique ID for each document that you write to Cosmos DB, and for that you can use an expression. Because we declared id to be our unique ID in Cosmos DB, we will use that for the name. Under expressions, type guid() and press enter to add a unique ID to the JSON document. When complete, you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger and action

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB over the next few minutes.

    Check the data in Cosmos Db by opening the Data explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    Resources: For self-study!

    Once you've grasped the basics in this post, there is so much more to learn!

    Thanks for stopping by!


    18. Logic Apps + Computer Vision

    · 10 min read
    Brian Benz

    Welcome to Day 18 of #30DaysOfServerless!

    Yesterday my Serverless September post introduced you to making Azure Logic Apps and Azure Cosmos DB work together with a sample application that collects weather data. Today I'm sharing a more robust solution that actually reads my mail. Let's learn about Teaching the cloud to read your mail!

    Ready? Let's go!


    What We'll Cover

    • Introduction to the ReadMail solution
    • Setting up Azure storage, Cosmos DB and Computer Vision
    • Connecting it all together with a Logic App
    • Resources: For self-study!


    Introducing the ReadMail solution

    The US Postal system offers a subscription service that sends you images of mail it will be delivering to your home. I decided it would be cool to try getting Azure to collect data based on these images, so that I could categorize my mail and track the types of mail that I received.

    To do this, I used Azure storage, Cosmos DB, Logic Apps, and computer vision. When a new email comes in from the US Postal service (USPS), it triggers a logic app that:

    • Posts attachments to Azure storage
    • Triggers Azure Computer vision to perform an OCR function on attachments
    • Extracts any results into a JSON document
    • Writes the JSON document to Cosmos DB

    workflow for the readmail solution

    In this post I'll walk you through setting up the solution for yourself.

    Prerequisites

    Setup Azure Services

    First, we'll create all of the target environments we need to be used by our Logic App, then we'll create the Logic App.

    1. Azure Storage

    We'll be using Azure storage to collect attached images from emails as they arrive. Adding images to Azure storage will also trigger a workflow that performs OCR on new attached images and stores the OCR data in Cosmos DB.

    To create a new Azure storage account from the portal dashboard, Select Create a resource > Storage account > Create.

    The Basics tab covers all of the features and information that we will need for this solution:

    | Section | Field | Required or optional | Description |
    | --- | --- | --- | --- |
    | Project details | Subscription | Required | Select the subscription for the new storage account. |
    | Project details | Resource group | Required | Create a new resource group that you will use for storage, Cosmos DB, Computer Vision and the Logic App. |
    | Instance details | Storage account name | Required | Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. |
    | Instance details | Region | Required | Select the appropriate region for your storage account. |
    | Instance details | Performance | Required | Select Standard performance for general-purpose v2 storage accounts (default). |
    | Instance details | Redundancy | Required | Select locally-redundant storage (LRS) for this example. |

    Select Review + create to accept the remaining default options, then validate and create the account.

    2. Azure CosmosDB

    Cosmos DB will be used to store the JSON documents returned by the Computer Vision OCR process.

    See more details and screen shots for setting up CosmosDB in yesterday's Serverless September post - Using Logic Apps with Cosmos DB

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then for simplicity use the same resource group you created when you set up storage. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    | Setting | Action |
    | --- | --- |
    | Container ID | id |
    | Container partition | /id |

    Press OK to create a database and container

    3. Azure Computer Vision

    Azure Cognitive Services' Computer Vision will perform an OCR process on each image attachment that is stored in Azure storage.

    From the portal dashboard, Select Create a resource > AI + Machine Learning > Computer Vision > Create.

    The Basics and Identity tabs cover all of the features and information that we will need for this solution:

    Basics Tab

    | Section | Field | Required or optional | Description |
    | --- | --- | --- | --- |
    | Project details | Subscription | Required | Select the subscription for the new service. |
    | Project details | Resource group | Required | Use the same resource group that you used for Azure storage and Cosmos DB. |
    | Instance details | Region | Required | Select the appropriate region for your Computer Vision service. |
    | Instance details | Name | Required | Choose a unique name for your Computer Vision service. |
    | Instance details | Pricing | Required | Select the free tier for this example. |

    Identity Tab

    | Section | Field | Required or optional | Description |
    | --- | --- | --- | --- |
    | System assigned managed identity | Status | Required | Enable system assigned identity to grant the resource access to other existing resources. |

    Select Review + create to accept the remaining default options, then validate and create the account.


    Connect it all with a Logic App

    Now we're ready to put this all together in a Logic App workflow!

    1. Create Logic App

    From the portal dashboard, Select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location, the rest of the settings can be left at their defaults.

    2. Create Workflow: Add Trigger

    Once the Logic App is created, select Create a workflow from designer.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for outlook.com on the right under Add a trigger. Choose outlook.com. Choose When a new email arrives as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Set the following values:

    | Parameter | Value |
    | --- | --- |
    | Folder | Inbox |
    | Importance | Any |
    | Only With Attachments | Yes |
    | Include Attachments | Yes |

    Then add a new parameter:

    | Parameter | Value |
    | --- | --- |
    | From | Add the email address that sends you the email with attachments |
    3. Create Workflow: Add Action (for Trigger)

    Choose add an action and choose control > for-each.

    logic app for each

    Inside the for-each action, in Select an output from previous steps, choose attachments. Then, again inside the for-each action, add the create blob action:

    Set the following values:

    | Parameter | Value |
    | --- | --- |
    | Folder Path | /mailreaderinbox |
    | Blob Name | Attachments Name |
    | Blob Content | Attachments Content |

    This extracts attachments from the email and creates a new blob for each attachment.

    Next, inside the same for-each action, add the get blob content action.

    Set the following values:

    | Parameter | Value |
    | --- | --- |
    | Blob | id |
    | Infer content type | Yes |

    We create and read from a blob for each attachment because Computer Vision needs a non-virtual source to read from when performing an OCR process. Because we enabled a system-assigned identity to grant Computer Vision access to other existing resources, it can access the blob but not the outlook.com attachment. Also, we pass the ID of the blob to use as a unique ID when writing to Cosmos DB.

    create blob from attachments

    Next, inside the same for-each action, choose add an action and choose control > condition. Set the value to Media Type > is equal to > image/JPEG

    The USPS sends attachments of multiple types, but we only want to scan attachments that have images of our mail, which are always JPEG images. If the condition is true, we will process the image with Computer Vision OCR and write the results to a JSON document in CosmosDB.

    In the True section of the condition, add an action and choose Computer Vision API > Optical Character Recognition (OCR) to JSON.

    Set the following values:

    | Parameter | Value |
    | --- | --- |
    | Image Source | Image Content |
    | Image content | File Content |

    In the same True section of the condition, choose add an action and choose Cosmos DB. Choose Create or Update Document from the actions. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    Be sure to use the ID passed from blob storage as your unique ID for Cosmos DB. That way you can troubleshoot any JSON or OCR issues by tracing the JSON document in Cosmos DB back to the blob in Azure storage. Also, include the Computer Vision JSON response, as it contains the results of the Computer Vision OCR scan. All other elements are optional.

    4. TEST WORKFLOW

    When complete, you should have an action in the Logic App designer that looks something like this:

    Logic App workflow create or update document in cosmosdb

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB each time that an email arrives with image attachments.

    Check the data in Cosmos Db by opening the Data explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    5. Congratulations

    You just built your personal ReadMail solution with Logic Apps! 🎉


    Resources: For self-study!

    Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!


    20. Integrate with Microsoft Graph

    · 10 min read
    Ayca Bas

    Welcome to Day 20 of #30DaysOfServerless!

    Every day, millions of people spend their precious time in productivity tools. What if you could use the data and intelligence behind Microsoft applications (Microsoft Teams, Outlook, and many other Office apps) to build seamless automations and custom apps to boost productivity?

    In this post, we'll learn how to build a seamless onboarding experience for new employees joining a company with the power of Microsoft Graph, integrated with Event Hubs and Logic Apps!


    What We'll Cover

    • ✨ The power of Microsoft Graph
    • 🖇️ How do Microsoft Graph and Event Hubs work together?
    • 🛠 Let's Build an Onboarding Workflow!
      • 1️⃣ Setup Azure Event Hubs + Key Vault
      • 2️⃣ Subscribe to users, receive change notifications from Logic Apps
      • 3️⃣ Create Onboarding workflow in the Logic Apps
    • 🚀 Debug: Your onboarding experience
    • ✋ Exercise: Try this tutorial out yourself!
    • 📚 Resources: For Self-Study


    ✨ The Power of Microsoft Graph

    Microsoft Graph is the gateway to data and intelligence in the Microsoft 365 platform. Microsoft Graph exposes REST APIs and client libraries to access data across Microsoft 365 core services such as Calendar, Teams, To Do, Outlook, People, Planner, OneDrive, OneNote and more.

    Overview of Microsoft Graph

    You can build custom experiences by using Microsoft Graph, such as automating the onboarding process for new employees. When new employees are created in Azure Active Directory, they will be automatically added to the Onboarding team on Microsoft Teams.

    Solution architecture


    🖇️ Microsoft Graph with Event Hubs

    Microsoft Graph uses a webhook mechanism to track changes in resources and deliver change notifications to the clients. For example, with Microsoft Graph Change Notifications, you can receive change notifications when:

    • a new task is added in the to-do list
    • a user changes the presence status from busy to available
    • an event is deleted/cancelled from the calendar

    If you'd like to track a large set of resources at a high frequency, use Azure Event Hubs instead of traditional webhooks to receive change notifications. Azure Event Hubs is a popular real-time event ingestion and distribution service built for scale.

    EVENT GRID - PARTNER EVENTS

    Microsoft Graph Change Notifications can be also received by using Azure Event Grid -- currently available for Microsoft Partners! Read the Partner Events Overview documentation for details.

    Setup Azure Event Hubs + Key Vault.

    To get Microsoft Graph Change Notifications delivered to Azure Event Hubs, we'll have to set up Azure Event Hubs and Azure Key Vault. We'll use Azure Key Vault to access the Event Hubs connection string.

    1️⃣ Create Azure Event Hubs

    1. Go to Azure Portal and select Create a resource, type Event Hubs and select Create.
    2. Fill in the Event Hubs namespace creation details, and then click Create.
    3. Go to the newly created Event Hubs namespace page, select Event Hubs tab from the left pane and + Event Hub:
      • Name your Event Hub as Event Hub
      • Click Create.
    4. Click the name of the Event Hub, and then select Shared access policies and + Add to add a new policy:
      • Give a name to the policy
      • Check Send and Listen
      • Click Create.
    5. After the policy has been created, click the name of the policy to open the details panel, and then copy the Connection string-primary key value. Write it down; you'll need it for the next step.
    6. Go to Consumer groups tab in the left pane and select + Consumer group, give a name for your consumer group as onboarding and select Create.

    2️⃣ Create Azure Key Vault

    1. Go to Azure Portal and select Create a resource, type Key Vault and select Create.
    2. Fill in the Key Vault creation details, and then click Review + Create.
    3. Go to newly created Key Vault and select Secrets tab from the left pane and click + Generate/Import:
      • Give a name to the secret
      • For the value, paste in the connection string you generated at the Event Hubs step
      • Click Create
      • Copy the name of the secret.
    4. Select Access Policies from the left pane and + Add Access Policy:
      • For Secret permissions, select Get
      • For Principal, select Microsoft Graph Change Tracking
      • Click Add.
    5. Select Overview tab from the left pane and copy the Vault URI.

    Subscribe for Logic Apps change notifications

    To start receiving Microsoft Graph Change Notifications, we'll need to create a subscription to the resource that we'd like to track - here, 'users'. We'll use Azure Logic Apps to create the subscription.

    To create a subscription for Microsoft Graph Change Notifications, we'll need to make an HTTP POST request to https://graph.microsoft.com/v1.0/subscriptions. Microsoft Graph requires Azure Active Directory authentication to make API calls. First, we'll need to register an app in Azure Active Directory, and then we will make the Microsoft Graph Subscription API call with Azure Logic Apps.
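    As a side note (the post itself uses Logic Apps), the same subscription request could be issued from code; here is a rough sketch with the Microsoft Graph .NET SDK, assuming the v4-style API, app-only credentials from the app registration below, and the same vault URI / secret name placeholders as the Logic App step:

    using Azure.Identity;
    using Microsoft.Graph;

    // App-only auth using the AAD app registered in the next section (placeholder IDs).
    var credential = new ClientSecretCredential("<TENANT-ID>", "<CLIENT-ID>", "<CLIENT-SECRET>");
    var graphClient = new GraphServiceClient(credential, new[] { "https://graph.microsoft.com/.default" });

    var subscription = new Subscription
    {
        ChangeType = "created,updated",
        ClientState = "secretClientValue",
        ExpirationDateTime = DateTimeOffset.UtcNow.AddHours(1),
        // Event Hubs delivery: point Graph at the Key Vault secret that holds the
        // Event Hubs connection string (same placeholders as the Logic App step below).
        NotificationUrl = "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=<YOUR-TENANT-ID>",
        Resource = "users"
    };

    // Graph SDK v4-style call; newer SDK versions use a slightly different API shape.
    var created = await graphClient.Subscriptions.Request().AddAsync(subscription);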

    1️⃣ Create an app in Azure Active Directory

    1. In the Azure Portal, go to Azure Active Directory and select App registrations from the left pane and select + New registration. Fill in the details for the new App registration form as below:
      • Name: Graph Subscription Flow Auth
      • Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
      • Select Register.
    2. Go to newly registered app in Azure Active Directory, select API permissions:
      • Select + Add a permission and Microsoft Graph
      • Select Application permissions and add User.Read.All and Directory.Read.All.
      • Select Grant admin consent for the organization
    3. Select Certificates & secrets tab from the left pane, select + New client secret:
      • Choose desired expiry duration
      • Select Add
      • Copy the value of the secret.
    4. Go to Overview from the left pane, copy Application (client) ID and Directory (tenant) ID.

    2️⃣ Create subscription with Azure Logic Apps

    1. Go to Azure Portal and select Create a resource, type Logic apps and select Create.

    2. Fill in the Logic Apps creation details, and then click Create.

    3. Go to the newly created Logic Apps page, select Workflows tab from the left pane and select + Add:

      • Give a name to the new workflow as graph-subscription-flow
      • Select Stateful as a state type
      • Click Create.
    4. Go to graph-subscription-flow, and then select Designer tab.

    5. In the Choose an operation section, search for Schedule and select Recurrence as a trigger. Fill in the parameters as below:

      • Interval: 61
      • Frequency: Minute
      • Time zone: Select your own time zone
      • Start time: Set a start time
    6. Select + button in the flow and select add an action. Search for HTTP and select HTTP as an action. Fill in the parameters as below:

      • Method: POST
      • URI: https://graph.microsoft.com/v1.0/subscriptions
      • Headers:
        • Key: Content-type
        • Value: application/json
      • Body:
      {
        "changeType": "created, updated",
        "clientState": "secretClientValue",
        "expirationDateTime": "@{addHours(utcNow(), 1)}",
        "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47",
        "resource": "users"
      }

      In notificationUrl, make sure to replace <YOUR-VAULT-URI> with the vault uri and <YOUR-KEY-VAULT-SECRET-NAME> with the secret name that you copied from the Key Vault.

      In resource, define the resource type you'd like to track changes. For our example, we will track changes for users resource.

      • Authentication:
        • Authentication type: Active Directory OAuth
        • Authority: https://login.microsoft.com
        • Tenant: Directory (tenant) ID copied from AAD app
        • Audience: https://graph.microsoft.com
        • Client ID: Application (client) ID copied from AAD app
        • Credential Type: Secret
        • Secret: value of the secret copied from AAD app
    7. Select Save and run your workflow from the Overview tab.

      Check your subscription in Graph Explorer: If you'd like to make sure that your subscription was created successfully by Logic Apps, you can go to Graph Explorer, log in with your Microsoft 365 account and make a GET request to https://graph.microsoft.com/v1.0/subscriptions. Your subscription should appear in the response after it's created successfully.

    Subscription workflow success

    After the subscription is created successfully by Logic Apps, Azure Event Hubs will receive notifications whenever a new user is created in Azure Active Directory.


    Create Onboarding workflow in Logic Apps

    We'll create a second workflow in the Logic Apps to receive change notifications from Event Hubs when a new user is created in Azure Active Directory, and to add the new user to the Onboarding team on Microsoft Teams.

    1. Go to the Logic Apps you created in the previous steps, select Workflows tab and create a new workflow by selecting + Add:
      • Give a name to the new workflow as teams-onboarding-flow
      • Select Stateful as a state type
      • Click Create.
    2. Go to teams-onboarding-flow, and then select Designer tab.
    3. In the Choose an operation section, search for Event Hub, select When events are available in Event Hub as a trigger. Setup Event Hub connection as below:
      • Create Connection:
        • Connection name: Connection
        • Authentication Type: Connection String
        • Connection String: Go to Event Hubs > Shared Access Policies > RootManageSharedAccessKey and copy Connection string–primary key
        • Select Create.
      • Parameters:
        • Event Hub Name: Event Hub
        • Consumer Group Name: onboarding
    4. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: Events
    5. Inside For each, select + in the flow and add an action, search for Data operations and select Parse JSON. Fill in Parse JSON action as below:
      • Content: Events Content
      • Schema: Copy the json content from schema-parse.json and paste as a schema
    6. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: value
      1. Inside For each, select + in the flow and add an action, search for Microsoft Teams and select Add a member to a team. Login with your Microsoft 365 account to create a connection and fill in Add a member to a team action as below:
      • Team: Create an Onboarding team on Microsoft Teams and select it
      • A user AAD ID for the user to add to a team: id
    7. Select Save.

    🚀 Debug your onboarding experience

    To debug our onboarding experience, we'll need to create a new user in Azure Active Directory and see if it's automatically added to the Onboarding team on Microsoft Teams.

    1. Go to Azure Portal and select Azure Active Directory from the left pane and go to Users. Select + New user and Create new user. Fill in the details as below:

      • User name: JaneDoe
      • Name: Jane Doe

      new user in Azure Active Directory

    2. When you added Jane Doe as a new user, it should trigger the teams-onboarding-flow to run. teams onboarding flow success

    3. Once the teams-onboarding-flow runs successfully, you should be able to see Jane Doe as a member of the Onboarding team on Microsoft Teams! 🥳 new member in Onboarding team on Microsoft Teams

    Congratulations! 🎉

    You just built an onboarding experience using Azure Logic Apps, Azure Event Hubs and Azure Key Vault.


    📚 Resources


    21. CloudEvents with Event Grid

    · 9 min read
    Justin Yoo

    Welcome to Day 21 of #30DaysOfServerless!

    We've so far walked through what Azure Event Grid is and how it generally works. Today, let's discuss how Azure Event Grid deals with CloudEvents.


    What We'll Cover


    OK. Let's get started!

    What is CloudEvents?

    Needless to say, events are everywhere. Events come not only from event-driven systems but also from many different systems and devices, including IoT ones like Raspberry PI.

    But the problem is that every event publisher (system/device that creates events) describes their events differently, meaning there is no standard way of describing events. It has caused many issues between systems, mainly from the interoperability perspective.

    1. Consistency: No standard way of describing events resulted in developers having to write their own event handling logic for each event source.
    2. Accessibility: There were no common libraries, tooling and infrastructure to deliver events across systems.
    3. Productivity: Overall productivity decreases because there is no standard format for events.

    Cloud Events Logo

    Therefore, the CNCF (Cloud Native Computing Foundation) introduced the concept called CloudEvents. CloudEvents is a specification that describes event data in a common way. Conforming any event data to this spec simplifies event declaration and delivery across systems and platforms and more, resulting in a huge productivity increase.

    How Azure Event Grid brokers CloudEvents

    Before CloudEvents, Azure Event Grid described events in their own way. Therefore, if you want to use Azure Event Grid, you should follow the event format/schema that Azure Event Grid declares. However, not every system/service/application follows the Azure Event Grid schema. Therefore, Azure Event Grid now supports CloudEvents spec as input and output formats.

    Azure Event Grid for Azure

    Take a look at the simple diagram below, which describes how Azure Event Grid captures events raised from various Azure services. In this diagram, Azure Key Vault takes the role of the event source or event publisher, and Azure Logic Apps takes the role of the event handler (I'll discuss Azure Logic Apps as the event handler later in this post). We use Azure Event Grid System Topic for Azure.

    Azure Event Grid for Azure

    Therefore, let's create an Azure Event Grid System Topic that captures events raised from Azure Key Vault when a new version of a secret is added.

    Azure Event Grid System Topic for Key Vault

    As Azure Event Grid makes use of the pub/sub pattern, you need to create the Azure Event Grid Subscription to consume the events. Here's the subscription that uses the Event Grid data format:

    ![Azure Event Grid System Subscription for Key Vault in Event Grid Format](./img/21-cloudevents-via-event-grid-03.png)

    Once you create the subscription, create a new version of the secret on Azure Key Vault. Then, Azure Key Vault raises an event, which is captured in the Event Grid format:

    [
      {
        "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
        "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
        "subject": "hello",
        "eventType": "Microsoft.KeyVault.SecretNewVersionCreated",
        "data": {
          "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
          "VaultName": "kv-xxxxxxxx",
          "ObjectType": "Secret",
          "ObjectName": "hello",
          "Version": "064dfc082fec463f8d4610ed6118811d",
          "NBF": null,
          "EXP": null
        },
        "dataVersion": "1",
        "metadataVersion": "1",
        "eventTime": "2022-09-21T07:08:09.1234567Z"
      }
    ]

    So, how is it different from the CloudEvents format? Let's take a look. According to the spec, the JSON data in CloudEvents might look like this:

    {
      "id" : "C234-1234-1234",
      "source" : "/mycontext",
      "specversion" : "1.0",
      "type" : "com.example.someevent",
      "comexampleextension1" : "value",
      "time" : "2018-04-05T17:31:00Z",
      "datacontenttype" : "application/cloudevents+json",
      "data" : {
        "appinfoA" : "abc",
        "appinfoB" : 123,
        "appinfoC" : true
      }
    }

    This time, let's create another subscription using the CloudEvents schema. Here's how to create the subscription against the system topic:

    Azure Event Grid System Subscription for Key Vault in CloudEvents Format

    Therefore, Azure Key Vault emits the event data in the CloudEvents format:

    {
      "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
      "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
      "specversion": "1.0",
      "type": "Microsoft.KeyVault.SecretNewVersionCreated",
      "subject": "hello",
      "time": "2022-09-21T07:08:09.1234567Z",
      "data": {
        "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
        "VaultName": "kv-xxxxxxxx",
        "ObjectType": "Secret",
        "ObjectName": "hello",
        "Version": "064dfc082fec463f8d4610ed6118811d",
        "NBF": null,
        "EXP": null
      }
    }

    Can you identify some differences between the Event Grid format and the CloudEvents format? Fortunately, both Event Grid schema and CloudEvents schema look similar to each other. But they might be significantly different if you use a different event source outside Azure.

    Azure Event Grid for Systems outside Azure

    As mentioned above, the event data described outside Azure or your own applications within Azure might not be understandable by Azure Event Grid. In this case, we need to use Azure Event Grid Custom Topic. Here's the diagram for it:

    Azure Event Grid for Applications outside Azure

    Let's create the Azure Event Grid Custom Topic. When you create the topic, make sure that you use the CloudEvent schema during the provisioning process:

    Azure Event Grid Custom Topic

    If your application needs to publish events to Azure Event Grid Custom Topic, your application should build the event data in the CloudEvents format. If you use a .NET application, add the NuGet package first.

    dotnet add package Azure.Messaging.EventGrid

    Then, create the publisher instance. You've already got the topic endpoint URL and the access key.

    var topicEndpoint = new Uri("<Azure Event Grid Custom Topic Endpoint URL>");
    var credential = new AzureKeyCredential("<Azure Event Grid Custom Topic Access Key>");
    var publisher = new EventGridPublisherClient(topicEndpoint, credential);

    Now, build the event data like below. Make sure that you follow the CloudEvents schema that requires additional metadata like event source, event type and content type.
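    The snippets below reference a MyEventData type that the post doesn't define; here is a minimal stand-in, assuming any JSON-serializable POCO will do:

    // Simple JSON-serializable payload used in the samples below.
    public class MyEventData
    {
        public string Hello { get; set; }
    }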

    var source = "/your/event/source";
    var type = "com.source.event.your/OnEventOccurs";

    var data = new MyEventData() { Hello = "World" };

    var @event = new CloudEvent(source, type, data);

    And finally, send the event to Azure Event Grid Custom Topic.

    await publisher.SendEventAsync(@event);

    The captured event data looks like the following:

    {
      "id": "cc2b2775-52b8-43b8-a7cc-c1c33c2b2e59",
      "source": "/your/event/source",
      "type": "com.source.event.my/OnEventOccurs",
      "data": {
        "Hello": "World"
      },
      "time": "2022-09-21T07:08:09.1234567+00:00",
      "specversion": "1.0"
    }

    However, due to limitations, someone might insist that their existing application doesn't or can't emit the event data in the CloudEvents format. In this case, what should we do? There's no standard way of sending the event data in the CloudEvents format to Azure Event Grid Custom Topic. One of the approaches we may be able to apply is to put a converter between the existing application and Azure Event Grid Custom Topic like below:

    Azure Event Grid for Applications outside Azure with Converter

    Once the Function app (or any converter app) receives the legacy event data, it internally converts it to the CloudEvents format and publishes it to Azure Event Grid.

    var data = default(MyRequestData);

    // Read and deserialise the legacy event payload from the request body.
    using (var reader = new StreamReader(req.Body))
    {
    var serialised = await reader.ReadToEndAsync();
    data = JsonConvert.DeserializeObject<MyRequestData>(serialised);
    }

    // Map the legacy payload onto the CloudEvents-friendly type and wrap it in a CloudEvent.
    var converted = new MyEventData() { Hello = data.Lorem };
    var @event = new CloudEvent(source, type, converted);
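
    Putting it all together, a minimal HTTP-triggered converter function might look like the sketch below. The function class name, trigger route and hard-coded source/type values are illustrative assumptions, not the sample's exact code:

    ```csharp
    using System;
    using System.IO;
    using System.Threading.Tasks;
    using Azure;
    using Azure.Messaging;
    using Azure.Messaging.EventGrid;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Newtonsoft.Json;

    public class LegacyEventConverterTrigger
    {
        [FunctionName("ConvertLegacyEvent")]
        public async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "post", Route = "convert")] HttpRequest req)
        {
            // 1. Read and deserialise the legacy payload.
            MyRequestData data;
            using (var reader = new StreamReader(req.Body))
            {
                var serialised = await reader.ReadToEndAsync();
                data = JsonConvert.DeserializeObject<MyRequestData>(serialised);
            }

            // 2. Map it onto the CloudEvents-friendly payload and wrap it in a CloudEvent.
            var converted = new MyEventData { Hello = data.Lorem };
            var @event = new CloudEvent("/your/event/source", "com.source.event.my/OnEventOccurs", converted);

            // 3. Publish to the Azure Event Grid Custom Topic.
            var publisher = new EventGridPublisherClient(
                new Uri("<Azure Event Grid Custom Topic Endpoint URL>"),
                new AzureKeyCredential("<Azure Event Grid Custom Topic Access Key>"));
            await publisher.SendEventAsync(@event);

            return new OkResult();
        }
    }
    ```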

    The converted event data is captured like this:

    {
    "id": "df296da3-77cd-4da2-8122-91f631941610",
    "source": "/your/event/source",
    "type": "com.source.event.my/OnEventOccurs",
    "data": {
    "Hello": "ipsum"
    },
    "time": "2022-09-21T07:08:09.1234567+00:00",
    "specversion": "1.0"
    }

    This approach is beneficial in many integration scenarios because it canonicalises all the event data.

    How Azure Logic Apps consumes CloudEvents

    I put Azure Logic Apps as the event handler in the previous diagrams. According to the CloudEvents spec, each event handler must implement request validation to avoid abuse. One good thing about using Azure Logic Apps is that it has already implemented this request validation feature, which means we can simply subscribe to the topic and consume the event data.

    Create a new Logic Apps instance and add the HTTP Request trigger. Once it's saved, you will get the endpoint URL.

    Azure Logic Apps with HTTP Request Trigger

    Then, create the Azure Event Grid Subscription with:

    • Endpoint type: Webhook
    • Endpoint URL: The Logic Apps URL from above.

    Azure Logic Apps with HTTP Request Trigger

    Once the subscription is ready, this Logic App works as the event handler. Here's how it receives the CloudEvents data from the subscription.

    Azure Logic Apps that Received CloudEvents data

    Now you've got the CloudEvents data. It's entirely up to you to handle that event data however you want!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how Azure Event Grid with CloudEvents works. Alternatively, the "Deploy to Azure" button below will provision all necessary Azure resources and deploy an Azure Functions app to mimic the event publisher.

    Deploy To Azure

    Resources: For self-study!

    Want to know more about CloudEvents in real-life examples? Here are several resources you can take a look at:


    24. Deploy ASP.NET app to ACA

    · 19 min read
    Alex Wolf

    Welcome to Day 24 of #30DaysOfServerless!

    We continue exploring E2E scenarios with this tutorial where you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps.

    The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.


    What We'll Cover

    • Deploy ASP.NET Core 6.0 app to Azure Container Apps
    • Automate deployment workflows using GitHub Actions
    • Provision and deploy resources using Azure Bicep
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.

    In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps. The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.

    You will use GitHub Actions in combination with Bicep to deploy the application. These tools provide an approachable and sustainable solution for building CI/CD pipelines and working with Container Apps.

    PRE-REQUISITES

    Architecture

    In this tutorial, we'll set up a container app environment with a separate container for each project in the sample store app. The major components of the sample project include:

    • A Blazor Server front-end web app to display product information
    • A products API to list available products
    • An inventory API to determine how many products are in stock
    • GitHub Actions and Bicep templates to provision Azure resources and then build and deploy the sample app.

    You will explore these templates later in the tutorial.

    Public internet traffic should be proxied to the Blazor app. The back-end APIs should only be reachable via requests from the Blazor app inside the container apps environment. This setup can be achieved using container apps environment ingress configurations during deployment.

    An architecture diagram of the shopping app


    Project Sources

    Want to follow along? Fork the sample below. The tutorial can be completed with or without Dapr integration. Pick the path you feel comfortable with. Dapr provides various benefits that make working with microservices easier - you can learn more in the docs. For this tutorial you will need a GitHub account and the Azure CLI.

    PICK YOUR PATH

    To follow along with this tutorial, fork the relevant sample project below.

    You can run the app locally from Visual Studio:

    • Right-click on the Blazor Store project and select Set as Startup Project.
    • Press the start button at the top of Visual Studio to run the app.
    • Once the app is running, start each API in the background by right-clicking on the project node and selecting Debug --> Start without debugging.

    Once the Blazor app is running, you should see something like this:

    An architecture diagram of the shopping app


    Configuring Azure credentials

    In order to deploy the application to Azure through GitHub Actions, you first need to create a service principal. The service principal will allow the GitHub Actions process to authenticate to your Azure subscription to create resources and deploy code. You can learn more about Service Principals in the Azure CLI documentation. For this step you'll need to be logged into the Azure CLI.

    1) If you have not done so already, make sure to fork the sample project to your own GitHub account or organization.

    2) Once you have completed this step, create a service principal using the Azure CLI command below:

    ```azurecli
    subscriptionId=$(az account show --query id --output tsv)
    az ad sp create-for-rbac --sdk-auth --name WebAndApiSample --role Contributor --scopes /subscriptions/$subscriptionId
    ```

    3) Copy the JSON output of the CLI command to your clipboard.

    4) Under the settings tab of your forked GitHub repo, create a new secret named AzureSPN. The name is important to match the Bicep templates included in the project, which we'll review later. Paste the service principal JSON you copied into the secret and save your changes. This new secret will be used by the GitHub Actions workflow to authenticate to Azure.

    A screenshot of adding GitHub secrets.

    Deploy using GitHub Actions

    You are now ready to deploy the application to Azure Container Apps using GitHub Actions. The sample application includes a GitHub Actions template that is configured to build and deploy any changes to a branch named deploy. The deploy branch does not exist in your forked repository by default, but you can easily create it through the GitHub user interface.

    1) Switch to the Actions tab along the top navigation of your GitHub repository. If you have not done so already, ensure that workflows are enabled by clicking the button in the center of the page.

    A screenshot showing how to enable GitHub actions

    2) Navigate to the main Code tab of your repository and select the main dropdown. Enter deploy into the branch input box, and then select Create branch: deploy from 'main'.

    A screenshot showing how to create the deploy branch

    3) On the new deploy branch, navigate down into the .github/workflows folder. You should see a file called deploy.yml, which contains the main GitHub Actions workflow script. Click on the file to view its content. You'll learn more about this file later in the tutorial.

    4) Click the pencil icon in the upper right to edit the document.

    5) Change the RESOURCE_GROUP_NAME: value to msdocswebappapis or another valid resource group name of your choosing.

    6) In the upper right of the screen, select Start commit and then Commit changes to commit your edit. This will persist the change to the file and trigger the GitHub Actions workflow to build and deploy the app.

    A screenshot showing how to commit changes

    7) Switch to the Actions tab along the top navigation again. You should see the workflow running to create the necessary resources and deploy the app. The workflow may take several minutes to run. When it completes successfully, all of the jobs should have a green checkmark icon next to them.

    The completed GitHub workflow.

    Explore the Azure resources

    Once the GitHub Actions workflow has completed successfully you can browse the created resources in the Azure portal.

    1) On the left navigation, select Resource Groups. Next, choose the msdocswebappapis resource group that was created by the GitHub Actions workflow.

    2) You should see seven resources available that match the screenshot and table descriptions below.

    The resources created in Azure.

    Resource name | Type | Description
    inventory | Container app | The containerized inventory API.
    msdocswebappapisacr | Container registry | A registry that stores the built container images for your apps.
    msdocswebappapisai | Application Insights | Application Insights provides advanced monitoring, logging and metrics for your apps.
    msdocswebappapisenv | Container apps environment | A container environment that manages networking, security and resource concerns. All of your containers live in this environment.
    msdocswebappapislogs | Log Analytics workspace | A workspace environment for managing logging and analytics for the container apps environment.
    products | Container app | The containerized products API.
    store | Container app | The Blazor front-end web app.

    3) You can view your running app in the browser by clicking on the store container app. On the overview page, click the Application Url link on the upper right of the screen.

    The link to browse the app.

    Understanding the GitHub Actions workflow

    The GitHub Actions workflow created and deployed resources to Azure using the deploy.yml file in the .github/workflows folder at the root of the project. The primary purpose of this file is to respond to events - such as commits to a branch - and run jobs to accomplish tasks. The deploy.yml file in the sample project has three main jobs:

    • Provision: Create the necessary resources in Azure, such as the container apps environment. This step leverages Bicep templates to create the Azure resources, which you'll explore in a moment.
    • Build: Create the container images for the three apps in the project and store them in the container registry.
    • Deploy: Deploy the container images to the different container apps created during the provisioning job.

    The deploy.yml file also accepts parameters to make the workflow more dynamic, such as setting the resource group name or the Azure region resources will be provisioned to.

    Below is a commented version of the deploy.yml file that highlights the essential steps.

    name: Build and deploy .NET application to Container Apps

    # Trigger the workflow on pushes to the deploy branch
    on:
      push:
        branches:
          - deploy

    env:
      # Set workflow variables
      RESOURCE_GROUP_NAME: msdocswebappapis

      REGION: eastus

      STORE_DOCKER: Store/Dockerfile
      STORE_IMAGE: store

      INVENTORY_DOCKER: Store.InventoryApi/Dockerfile
      INVENTORY_IMAGE: inventory

      PRODUCTS_DOCKER: Store.ProductApi/Dockerfile
      PRODUCTS_IMAGE: products

    jobs:
      # Create the required Azure resources
      provision:
        runs-on: ubuntu-latest

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Create resource group
            uses: azure/CLI@v1
            with:
              inlineScript: >
                echo "Creating resource group in Azure"
                echo "Executing 'az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}'"
                az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}

          # Use Bicep templates to create the resources in Azure
          - name: Creating resources
            uses: azure/CLI@v1
            with:
              inlineScript: >
                echo "Creating resources"
                az deployment group create --resource-group ${{ env.RESOURCE_GROUP_NAME }} --template-file '/github/workspace/Azure/main.bicep' --debug

      # Build the three app container images
      build:
        runs-on: ubuntu-latest
        needs: provision

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Set up Docker Buildx
            uses: docker/setup-buildx-action@v1

          - name: Login to ACR
            run: |
              set -euo pipefail
              access_token=$(az account get-access-token --query accessToken -o tsv)
              refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
              docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

          - name: Build the products api image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}
              file: ${{ env.PRODUCTS_DOCKER }}

          - name: Build the inventory api image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}
              file: ${{ env.INVENTORY_DOCKER }}

          - name: Build the frontend image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}
              file: ${{ env.STORE_DOCKER }}

      # Deploy the three container images
      deploy:
        runs-on: ubuntu-latest
        needs: build

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Installing Container Apps extension
            uses: azure/CLI@v1
            with:
              inlineScript: >
                az config set extension.use_dynamic_install=yes_without_prompt

                az extension add --name containerapp --yes

          - name: Login to ACR
            run: |
              set -euo pipefail
              access_token=$(az account get-access-token --query accessToken -o tsv)
              refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
              docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

          - name: Deploy Container Apps
            uses: azure/CLI@v1
            with:
              inlineScript: >
                az containerapp registry set -n products -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n products -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}

                az containerapp registry set -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}

                az containerapp registry set -n store -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n store -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}

          - name: logout
            run: >
              az logout

    Understanding the Bicep templates

    During the provisioning stage of the GitHub Actions workflow, the main.bicep file is processed. Bicep files provide a declarative way of generating resources in Azure and are ideal for managing infrastructure as code. You can learn more about Bicep in the related documentation. The main.bicep file in the sample project creates the following resources:

    • The container registry to store images of the containerized apps.
    • The container apps environment, which handles networking and resource management for the container apps.
    • Three container apps - one for the Blazor front-end and two for the back-end product and inventory APIs.
    • Configuration values to connect these services together.

    main.bicep without Dapr

    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    // create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    // create the various configuration pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the store api container app
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    main.bicep with Dapr


    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    // create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    // create the various config pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the store api container app
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: true
    }
    }


    Bicep Modules

    The main.bicep file references modules to create resources, such as module products. Modules are a feature of Bicep templates that enable you to abstract resource declarations into their own files or sub-templates. As the main.bicep file is processed, the defined modules are also evaluated. Modules allow you to create resources in a more organized and reusable way. They can also define input and output parameters that are passed to and from the parent template, such as the name of a resource.

    For example, the environment.bicep module extracts the details of creating a container apps environment into a reusable template. The module defines necessary resource dependencies such as Log Analytics Workspaces and an Application Insights instance.

    environment.bicep without Dapr

    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString

    environment.bicep with Dapr


    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString


    The container_app.bicep template defines numerous parameters to provide a reusable template for creating container apps. This allows the module to be used in other CI/CD pipelines as well.

    container_app.bicep without Dapr

    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn

    container_app.bicep with Dapr


    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param appProtocol string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn


    Understanding configuration differences with Dapr

    The code for this specific sample application is largely the same whether or not Dapr is integrated. However, even with this simple app, there are a few benefits and configuration differences when using Dapr that are worth exploring.

    In this scenario most of the changes are related to communication between the container apps. However, you can explore the full range of Dapr benefits by reading the Dapr integration with Azure Container Apps article in the conceptual documentation.

    Without Dapr

    Without Dapr, the main.bicep template handles wiring up the front-end store app to communicate with the back-end APIs by manually managing environment variables. The Bicep template retrieves the fully qualified domain names (FQDNs) of the API apps as output parameters when they are created. Those values are then set as environment variables on the store container app.


    // Retrieve environment variables from API container creation
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    // create the store api container app, passing in config
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    The environment variables are then read in Program.cs and used to configure the base URLs of the corresponding HTTP clients.


    builder.Services.AddHttpClient("Products", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("ProductsApi")));
    builder.Services.AddHttpClient("Inventory", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("InventoryApi")));

    With Dapr

    Dapr can be enabled on a container app when it is created, as seen below. This configuration adds a Dapr sidecar to the app to streamline discovery and communication features between the different container apps in your environment.


    // Create the container app with Dapr enabled
    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]

    // Rest of template omitted for brevity...
    }
    }

    Some of these Dapr features can be surfaced through the program file. You can configure your HttpClient to leverage Dapr configurations when communicating with other apps in your environment.


    // reconfigure code to make requests to Dapr sidecar
    var baseURL = (Environment.GetEnvironmentVariable("BASE_URL") ?? "http://localhost") + ":" + (Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500");
    builder.Services.AddHttpClient("Products", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Products");
    });

    builder.Services.AddHttpClient("Inventory", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Inventory");
    });


    Clean up resources

    If you're not going to continue to use this application, you can delete the Azure Container Apps and all the associated services by removing the resource group.

    Follow these steps in the Azure portal to remove the resources you created:

    1. In the Azure portal, navigate to the msdocswebappapis resource group using the left navigation or search bar.
    2. Select the Delete resource group button at the top of the resource group Overview.
    3. Enter the resource group name msdocswebappapis in the Are you sure you want to delete "msdocswebappapis" confirmation dialog.
    4. Select Delete.
      The process to delete the resource group may take a few minutes to complete.

    25. Deploy Spring Boot App to ACA

    · 7 min read
    Brian Benz

    Welcome to Day 25 of #30DaysOfServerless!

    Azure Container Apps enable application code packaged in containers to run and scale without the overhead of managing cloud infrastructure and container orchestration. In this post I'll show you how to deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps.


    What We'll Cover

    • Introduction to Deploying Java containers in the cloud
    • Step-by-step: Deploying to Azure Container Registry
    • Step-by-step: Deploying and running on Azure Container Apps
    • Resources: For self-study!


    Deploy Java containers to cloud

    We'll deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps. Here are the main steps:

    • Create Azure Container Registry (ACR) on Azure portal
    • Create Azure Container App (ACA) on Azure portal.
    • Deploy code to Azure Container Registry from the Azure CLI.
    • Deploy container from ACR to ACA using the Azure portal.
    PRE-REQUISITES

    Sign in to Azure from the CLI using the az login command, and follow the prompts in your browser to complete the authentication process. Also, ensure you're running the latest version of the CLI by using the az upgrade command.

    1. Get Sample Code

    Fork and clone the sample GitHub repo to your local machine. Navigate to the repo on GitHub and click Fork in the top-right corner of the page.

    The example code that we're using is a very basic containerized Spring Boot example. There is a lot more to learn about Spring Boot apps in Docker; for a deep dive, check out this Spring Boot Guide.

    2. Run Sample Locally (Optional)

    If you have docker installed locally, you can optionally test the code on your local machine. Navigate to the root directory of the forked repository and run the following commands:

    docker build -t spring-boot-docker-aca .
    docker run -p 8080:8080 spring-boot-docker-aca

    Open a browser and go to http://localhost:8080. You should see this message:

    Hello Docker World

    That indicates that the Spring Boot app is successfully running locally in a Docker container.

    Next, let's set up an Azure Container Registry and an Azure Container App and deploy this container to the cloud!


    3. Step-by-step: Deploy to ACR

    To create a container registry from the portal dashboard, Select Create a resource > Containers > Container Registry.

    Navigate to container registry in portal

    In the Basics tab, enter values for Resource group and Registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. Create a new resource group in the West US location named spring-boot-docker-aca. Select the 'Basic' SKU.

    Keep the default values for the remaining settings. Then select Review + create, then Create. When the Deployment succeeded message appears, select the container registry in the portal.

    Note the registry server name ending with azurecr.io. You will use this in the following steps when you push and pull images with Docker.

    3.1 Log into registry using the Azure CLI

    Before pushing and pulling container images, you must log in to the registry instance. Sign into the Azure CLI on your local machine, then run the az acr login command. For this step, use the registry name, not the server name ending with azurecr.io.

    From the command line, type:

    az acr login --name myregistryname

    The command returns Login Succeeded once completed.

    3.2 Build & deploy with az acr build

    Next, we're going to deploy the Docker container we created earlier using the az acr build command. az acr build creates a Docker build from local code and pushes the container image to Azure Container Registry if the build is successful.

    Go to your local clone of the spring-boot-docker-aca repo and, from the command line, type:

    az acr build --registry myregistryname --image spring-boot-docker-aca:v1 .

    3.3 List container images

    Once the az acr build command is complete, you should be able to view the container as a repository in the registry. In the portal, open your registry and select Repositories, then select the spring-boot-docker-aca repository you just created. You should also see the v1 image under Tags.

    4. Deploy on ACA

    Now that we have an image in the Azure Container Registry, we can deploy it to Azure Container Apps. For the first deployment, we'll pull the container from our ACR as part of the ACA setup.

    4.1 Create a container app

    We'll create the container app at the same place that we created the container registry in the Azure portal. From the portal, select Create a resource > Containers > Container App. In the Basics tab, set these values:

    4.2 Enter project details

    Setting | Action
    Subscription | Your Azure subscription.
    Resource group | Use the spring-boot-docker-aca resource group.
    Container app name | Enter spring-boot-docker-aca.

    4.3 Create an environment

    1. In the Create Container App environment field, select Create new.

    2. In the Create Container App Environment page on the Basics tab, enter the following values:

      Setting | Value
      Environment name | Enter my-environment.
      Region | Select westus3.
    3. Select OK.

    4. Select the Create button at the bottom of the Create Container App Environment page.

    5. Select the Next: App settings button at the bottom of the page.

    5. App settings tab

    The App settings tab is where you connect to the ACR and pull the repository image:

    Setting | Action
    Use quickstart image | Uncheck the checkbox.
    Name | Enter spring-boot-docker-aca.
    Image source | Select Azure Container Registry.
    Registry | Select your ACR from the list.
    Image | Select spring-boot-docker-aca from the list.
    Image Tag | Select v1 from the list.

    5.1 Application ingress settings

    Setting | Action
    Ingress | Select Enabled.
    Ingress visibility | Select External to publicly expose your container app.
    Target port | Enter 8080.

    5.2 Deploy the container app

    1. Select the Review and create button at the bottom of the page.
    2. Select Create.

    Once the deployment is successfully completed, you'll see the message: Your deployment is complete.

    5.3 Verify deployment

    In the portal, go to the Overview of your spring-boot-docker-aca Azure Container App, and click on the Application Url. You should see this message in the browser:

    Hello Docker World

    That indicates that the Spring Boot app is running in a Docker container in your spring-boot-docker-aca Azure Container App.

    Resources: For self-study!

    Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!


    28. Serverless + Power Platforms

    · 14 min read
    Justin Yoo

    Welcome to Day 28 of #30DaysOfServerless!

    Since it's the serverless end-to-end week, I'm going to discuss how to take a serverless application built with Azure Functions and its OpenAPI extension and seamlessly integrate it with a Power Platform custom connector through Azure API Management, in a post I call "Where am I? My GPS Location with Serverless Power Platform Custom Connector".

    OK. Are you ready? Let's get started!


    What We'll Cover

    • What is Power Platform custom connector?
    • Proxy app to Google Maps and Naver Map API
    • API Management integration
    • Two ways of building custom connector
    • Where am I? Power Apps app
    • Exercise: Try this yourself!
    • Resources: For self-study!


    SAMPLE REPO

    Want to follow along? Check out the sample app on GitHub repository used in this post.

    What is Power Platform custom connector?

    Power Platform is a low-code/no-code application development tool for fusion teams: groups of people from various disciplines, including field experts (domain experts), IT professionals and professional developers, who work together to deliver business value. Within a fusion team, Power Platform turns the domain experts into citizen developers or low-code developers. What makes Power Platform even more powerful is that it offers hundreds of connectors to Microsoft 365 and third-party services like SAP, ServiceNow, Salesforce, Google, etc.

    However, what if you want to use your internal APIs, or APIs that don't yet offer an official connector? For example, suppose your company has an inventory management system and you want to use it within Power Apps or Power Automate. That is exactly where Power Platform custom connectors come in.

    Inventory Management System for Power Apps

    Therefore, Power Platform custom connectors enrich citizen developers' capabilities, because they can surface any API application for those citizen developers to use.

    In this post, let's build a custom connector that provides a static map image generated by Google Maps API and Naver Map API using your GPS location.

    Proxy app to Google Maps and Naver Map API

    First, let's build an Azure Functions app that connects to Google Maps and Naver Map. Suppose that you've already got the API keys for both services. If you haven't yet, get the keys first by visiting here for Google and here for Naver. Then, store them in local.settings.json within your Azure Functions app.

    {
    "Values": {
    ...
    "Maps__Google__ApiKey": "<GOOGLE_MAPS_API_KEY>",
    "Maps__Naver__ClientId": "<NAVER_MAP_API_CLIENT_ID>",
    "Maps__Naver__ClientSecret": "<NAVER_MAP_API_CLIENT_SECRET>"
    }
    }
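
    The double-underscore keys map to nested configuration sections (Maps:Google:ApiKey and so on), which the function app can bind to a small settings type. A sketch of what such a type might look like (names are illustrative; the sample's actual classes may differ):

    ```csharp
    // Hypothetical settings types bound from the Maps__* configuration values above.
    public class MapsSettings
    {
        public GoogleMapsSettings Google { get; set; }
        public NaverMapSettings Naver { get; set; }
    }

    public class GoogleMapsSettings
    {
        public string ApiKey { get; set; }
    }

    public class NaverMapSettings
    {
        public string ClientId { get; set; }
        public string ClientSecret { get; set; }
    }
    ```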

    Here's the sample logic to get the static image from Google Maps API. It takes the latitude and longitude of your current location and image zoom level, then returns the static map image. There are a few hard-coded assumptions, though:

    • The image size should be 400x400.
    • The image should be in .png format.
    • The marker should be red and show my location.

    public class GoogleMapService : IMapService
    {
    public async Task<byte[]> GetMapAsync(HttpRequest req)
    {
    var latitude = req.Query["lat"];
    var longitude = req.Query["long"];
    var zoom = (string)req.Query["zoom"] ?? "14";

    var sb = new StringBuilder();
    sb.Append("https://maps.googleapis.com/maps/api/staticmap")
    .Append($"?center={latitude},{longitude}")
    .Append("&size=400x400")
    .Append($"&zoom={zoom}")
    .Append($"&markers=color:red|{latitude},{longitude}")
    .Append("&format=png32")
    .Append($"&key={this._settings.Google.ApiKey}");
    var requestUri = new Uri(sb.ToString());

    var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

    return bytes;
    }
    }
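
    The _http and _settings fields used above come from constructor injection in the sample; a minimal sketch of that wiring (illustrative, assuming the MapsSettings type shown earlier):

    ```csharp
    using System.Net.Http;

    public class GoogleMapService
    {
        private readonly HttpClient _http;
        private readonly MapsSettings _settings;

        // Both dependencies are supplied by the Functions host's dependency injection container.
        public GoogleMapService(HttpClient http, MapsSettings settings)
        {
            _http = http;
            _settings = settings;
        }

        // GetMapAsync(...) as shown above.
    }
    ```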

    The NaverMapService class has similar logic with the same input and assumptions. Here's the code:

    public class NaverMapService : IMapService
    {
    public async Task<byte[]> GetMapAsync(HttpRequest req)
    {
    var latitude = req.Query["lat"];
    var longitude = req.Query["long"];
    var zoom = (string)req.Query["zoom"] ?? "13";

    var sb = new StringBuilder();
    sb.Append("https://naveropenapi.apigw.ntruss.com/map-static/v2/raster")
    .Append($"?center={longitude},{latitude}")
    .Append("&w=400")
    .Append("&h=400")
    .Append($"&level={zoom}")
    .Append($"&markers=color:blue|pos:{longitude}%20{latitude}")
    .Append("&format=png")
    .Append("&lang=en");
    var requestUri = new Uri(sb.ToString());

    this._http.DefaultRequestHeaders.Clear();
    this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY-ID", this._settings.Naver.ClientId);
    this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY", this._settings.Naver.ClientSecret);

    var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

    return bytes;
    }
    }

    Let's take a look at the function endpoints for Google Maps and Naver Map. As the GetMapAsync(req) method returns a byte array, you need to return it as a FileContentResult with the content type image/png.

    // Google Maps
    public class GoogleMapsTrigger
    {
    [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
    public async Task<IActionResult> GetGoogleMapImage(
    [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
    {
    this._logger.LogInformation("C# HTTP trigger function processed a request.");

    var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

    return new FileContentResult(bytes, "image/png");
    }
    }

    // Naver Map
    public class NaverMapsTrigger
    {
    [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
    public async Task<IActionResult> GetNaverMapImage(
    [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
    {
    this._logger.LogInformation("C# HTTP trigger function processed a request.");

    var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

    return new FileContentResult(bytes, "image/png");
    }
    }

    Then, add the OpenAPI capability to each function endpoint. Here's the example:

    // Google Maps
    public class GoogleMapsTrigger
    {
    [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
    // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
    [OpenApiOperation(operationId: nameof(GoogleMapsTrigger.GetGoogleMapImage), tags: new[] { "google" })]
    [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
    [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
    [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `14`")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
    // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
    public async Task<IActionResult> GetGoogleMapImage(
    [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
    {
    ...
    }
    }

    // Naver Map
    public class NaverMapsTrigger
    {
    [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
    // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
    [OpenApiOperation(operationId: nameof(NaverMapsTrigger.GetNaverMapImage), tags: new[] { "naver" })]
    [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
    [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
    [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `13`")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
    // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
    public async Task<IActionResult> GetNaverMapImage(
    [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
    {
    ...
    }
    }

    Run the function app locally. Here are the latitude and longitude values for Seoul, Korea:

    • latitude: 37.574703
    • longitude: 126.978519

    Google Map for Seoul

    It seems to be working! Let's deploy it to Azure.

    API Management integration

    Visual Studio 2022 provides a built-in tool for deploying Azure Functions apps to Azure. In addition, the deployment tool supports seamless integration with Azure API Management, as long as your Azure Functions app enables the OpenAPI capability. In this post, I'm going to use this feature. Right-click on the Azure Functions project and select the "Publish" menu.

    Visual Studio context menu for publish

    Then, you will see the publish screen. Click the "➕ New" button to create a new publish profile.

    Create a new publish profile

    Choose "Azure" and click the "Next" button.

    Choose the target platform for publish

    Select the app instance. This time, simply pick the "Azure Function App (Windows)" option, then click "Next".

    Choose the target OS for publish

    If you have already provisioned an Azure Functions app instance, you will see it on the screen. Otherwise, create a new one. Then, click "Next".

    Choose the target instance for publish

    In the next step, you are asked to choose the Azure API Management instance for integration. Choose one, or create a new one. Then, click "Next".

    Choose the APIM instance for integration

    Finally, select the publish method: either local publish or a GitHub Actions workflow. Let's pick the local publish method for now. Then, click "Finish".

    Choose the deployment type

    The publish profile has been created. Click "Close" to move on.

    Publish profile created

    Now the function app is ready for deployment. Click the "Publish" button and see how it goes.

    Publish function app

    The Azure function app has been deployed and integrated with the Azure API Management instance.

    Function app published

    Go to the published function app site, and everything looks OK.

    Function app on Azure

    And API Management shows the function app integrated perfectly.

    Function app integrated with APIM

    Now, you are ready to create a custom connector. Let's move on.

    Two ways of building custom connector

    There are two ways to create a custom connector.

    Export custom connector from API Management

    First, you can directly use the built-in API Management feature: click the ellipsis icon and select the "Create Power Connector" menu.

    Create Power Connector menu

    Then, you are redirected to this screen. While the "API" and "API display name" fields are pre-populated, you need to choose the Power Platform environment tied to your tenant. Choose an environment, click "Authenticate", and click "Create".

    Create custom connector screen

    Check your custom connector on Power Apps or Power Automate side.

    Custom connector created on Power Apps

    However, there's a caveat to this approach. Because the connector is tied to your tenant, you should use the second approach if you want to use this custom connector on another tenant.

    Import custom connector from OpenAPI document or URL

    Click the ellipsis icon again and select the "Export" menu.

    Export menu

    On the Export API screen, choose the "OpenAPI v2 (JSON)" panel because Power Platform custom connector currently accepts version 2 of the OpenAPI document.

    Select OpenAPI v2

    Download the OpenAPI document to your local computer and move to your Power Apps or Power Automate page under your desired environment. I'm going to use the Power Automate page. First, go to the "Data" ➡️ "Custom connectors" page. Then, click the "➕ New custom connector" ➡️ "Import an OpenAPI file" at the top right corner.

    New custom connector

    When a modal pops up, give the custom connector name and import the OpenAPI document exported above. Then, click "Continue".

    Import custom connector

    Actually, that's it! Next, click the "✔️ Create connector" button to create the connector.

    Create custom connector

    Go back to the custom connector page, and you will see the "Maps API" custom connector you just created.

    Custom connector imported

    So, you are ready to create a Power Apps app to display your location on Google Maps or Naver Map! Let's move on.

    Where am I? Power Apps app

    Open the Power Apps Studio and create an empty canvas app named Where am I, with a phone layout.

    Custom connector integration

    To use the custom connector created above, you need to add it to the Power App. Click the cylinder icon on the left and click the "Add data" button.

    Add custom connector to data pane

    Search the custom connector name, "Maps API", and click the custom connector to add.

    Search custom connector

    To use the custom connector, you also need to create a connection to it. Click the "Connect" button and move on.

    Create connection to custom connector

    Now, you've got the connection to the custom connector.

    Connection to custom connector ready

    Controls

    Let's build the Power Apps app. First of all, put three controls Image, Slider and Button onto the canvas.

    Power Apps control added

    Click the "Screen1" control and change the value on the property "OnVisible" to the formula below. The formula stores the current slider value in the zoomlevel collection.

    ClearCollect(
    zoomlevel,
    Slider1.Value
    )

    Click the "Botton1" control and change the value on the property "OnSelected" to the formula below. It passes the current latitude, longitude and zoom level to the custom connector and receives the image data. The received image data is stored in the result collection.

    ClearCollect(
    result,
    MAPS.GetGoogleMapImage(
    Location.Latitude,
    Location.Longitude,
    { zoom: First(zoomlevel).Value }
    )
    )

    Click the "Image1" control and change the value on the property "Image" to the formula below. It gets the image data from the result collection.

    First(result).Url

    Click the "Slider1" control and change the value on the property "OnChange" to the formula below. It stores the current slider value to the zoomlevel collection, followed by calling the custom connector to get the image data against the current location.

    ClearCollect(
    zoomlevel,
    Slider1.Value
    );
    ClearCollect(
    result,
    MAPS.GetGoogleMapImage(
    Location.Latitude,
    Location.Longitude,
    { zoom: First(zoomlevel).Value }
    )
    )

    That seems to be OK. Let's click the "Where am I?" button. But it doesn't show the image. The First(result).Url value is actually similar to this:

    appres://blobmanager/1090a86393a843adbfcf428f0b90e91b/1

    It's an internal image reference that you can't access from outside the app.

    Workaround Power Automate workflow

    Therefore, you need a workaround using a Power Automate workflow to sort out this issue. Open the Power Automate Studio, create an instant cloud flow with the Power Apps trigger, and name it "Where am I". Then add the input parameters lat, long and zoom.

    Power Apps trigger on Power Automate workflow

    Add custom connector action to get the map image.

    Select action to get the Google Maps image

    In the action, pass the appropriate parameters to the action.

    Pass parameters to the custom connector action

    Add a "Response" action and put the following values into each field.

    • "Body" field:

      {
      "base64Image": <power_automate_expression>
      }

      The <power_automate_expression> should be concat('data:', body('GetGoogleMapImage')?['$content-type'], ';base64,', body('GetGoogleMapImage')?['$content']).

    • "Response Body JSON Schema" field:

      {
        "type": "object",
        "properties": {
          "base64Image": {
            "type": "string"
          }
        }
      }

    Format the Response action

    Let's return to the Power Apps Studio and add the Power Automate workflow you created.

    Add Power Automate workflow

    Select "Button1" and change the value on the property "OnSelect" below. It replaces the direct call to the custom connector with the Power Automate workflow.

    ClearCollect(
        result,
        WhereamI.Run(
            Location.Latitude,
            Location.Longitude,
            First(zoomlevel).Value
        )
    )

    Also change the "OnChange" property of the "Slider1" control to the formula below, replacing the custom connector call with the Power Automate workflow call.

    ClearCollect(
        zoomlevel,
        Slider1.Value
    );
    ClearCollect(
        result,
        WhereamI.Run(
            Location.Latitude,
            Location.Longitude,
            First(zoomlevel).Value
        )
    )

    And finally, change the "Image1" control's "Image" property to the formula below.

    First(result).base64Image

    The workaround has been applied. Click the "Where am I?" button to see your current location from Google Maps.

    Run Power Apps app #1

    If you move the slider left or right, you will see the map image zoom in or out.

    Run Power Apps app #2

    Now, you've created a Power Apps app to show your current location using:

    • the Google Maps API, called through the custom connector, and
    • a custom connector built with Azure Functions and the OpenAPI extension!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how the custom connector works. After forking the repository, make sure that you create all the necessary repository secrets documented in the README file.

    Then, click the "Deploy to Azure" button, and it will provision all necessary Azure resources and deploy an Azure Functions app for a custom connector.
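
    If you'd rather provision things from the command line than use the "Deploy to Azure" button, a minimal az CLI sketch could look like the following. The resource names are placeholders and the repository's template may create additional resources, so treat this as an illustration rather than the exact deployment.

    ```bash
    # Illustration only: resource names are placeholders, not values from the repository.
    az group create --name rg-maps-connector --location eastus
    az storage account create --name stmapsconnector123 --resource-group rg-maps-connector --sku Standard_LRS
    az functionapp create --name func-maps-connector-123 --resource-group rg-maps-connector \
      --storage-account stmapsconnector123 --consumption-plan-location eastus \
      --runtime dotnet --functions-version 4
    ```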

    Deploy To Azure

    Once everything is deployed successfully, try to create a Power Apps app and Power Automate workflow to see your current location in real-time!

    Resources: For self-study!

    Want to know more about Power Platform custom connectors and the Azure Functions OpenAPI extension? Here are several resources you can take a look at:

    - + \ No newline at end of file diff --git a/blog/29-awesome-azd/index.html b/blog/29-awesome-azd/index.html index 1ff8048aa0..6c5f084e72 100644 --- a/blog/29-awesome-azd/index.html +++ b/blog/29-awesome-azd/index.html @@ -14,13 +14,13 @@ - +

    Oct | `awesome-azd` Templates

    · 5 min read
    Savannah Ostrowski

    Welcome to Beyond #30DaysOfServerless in October!

    Yes, it's October!! And since we ended #ServerlessSeptember with a focus on End-to-End Development for Serverless on Azure, we thought it would be good to share updates in October that can help you skill up even further.

    Today, we're following up on the Code to Cloud with azd blog post (Day #29) where we introduced the Azure Developer CLI (azd), an open-source tool for streamlining your end-to-end developer experience going from local development environment to Azure cloud. In today's post, we celebrate the October 2022 release of the tool, with three cool new features.

    And if it's October, it must be #Hacktoberfest!! Read on to learn about how you can take advantage of one of the new features, to contribute to the azd open-source community and ecosystem!

    Ready? Let's go!


    What We'll Cover

    • Azure Friday: Introducing the Azure Developer CLI (Video)
    • October 2022 Release: What's New in the Azure Developer CLI?
      • Azure Pipelines for CI/CD: Learn more
      • Improved Infrastructure as Code structure via Bicep modules: Learn more
      • A new azd template gallery: The new azd-templates gallery for community use! Learn more
    • Awesome-Azd: The new azd-templates gallery for Community use
      • Features: discover, create, contribute, request - templates
      • Hacktoberfest: opportunities to contribute in October - and beyond!


    Azure Friday

    This post is a follow-up to our #ServerlessSeptember post on Code to Cloud with Azure Developer CLI where we introduced azd, a new open-source tool that makes it quick and simple for you to move your application from a local development environment to Azure, streamlining your end-to-end developer workflow in the process.

    Prefer to watch a video overview? I have you covered! Check out my recent conversation with Scott Hanselman on Azure Friday where we:

    • talked about the code-to-cloud developer journey
    • walked through the ins and outs of an azd template
    • explored Azure Developer CLI commands in the terminal and VS Code, and
    • (probably most importantly) got a web app up and running on Azure with a database, Key Vault and monitoring all in a couple of minutes

    October Release

    We're pleased to announce the October 2022 release of the Azure Developer CLI (currently 0.3.0-beta.2). Read the release announcement for more details. Here are the highlights:

    • Azure Pipelines for CI/CD: This addresses azure-dev#101, adding support for Azure Pipelines (alongside GitHub Actions) as a CI/CD provider. Learn more about usage and related documentation.
    • Improved Infrastructure as Code structure via Bicep modules: This addresses azure-dev#543, which recognized the complexity of using a single resources.bicep file for all resources. With this release, azd templates now come with Bicep modules organized by purpose making it easier to edit and understand. Learn more about this structure, and how to use it.
    • New Templates Gallery - awesome-azd: This addresses azure-dev#398, which aimed to make templates more discoverable and easier to contribute. Learn more about how the new gallery improves the template discovery experience.

    In the next section, we'll dive briefly into the last feature, introducing the new awesome-azd site and resource for templates discovery and contribution. And, since it's #Hacktoberfest season, we'll talk about the Contributor Guide and the many ways you can contribute to this project - with, or without, code.


    It's awesome-azd

    Welcome to awesome-azd, a new template gallery hosted on GitHub Pages, meant to be a destination site for discovering, requesting, and contributing azd-templates for community use!

    In addition, its README reflects the awesome-list resource format, providing a location for the community to share "best of" resources for Azure Developer CLI - from blog posts and videos, to full-scale tutorials and templates.

    The Gallery is organized into three main areas:

    Take a minute to explore the Gallery and note the features:

    • Search for templates by name
    • Requested Templates - indicating asks from the community
    • Featured Templates - highlighting high-quality templates
    • Filters - to discover templates using and/or query combinations

    Check back often to see the latest contributed templates and requests!


    Hacktoberfest

    So, why is this a good time to talk about the Gallery? Because October means it's time for #Hacktoberfest - a month-long celebration of open-source projects and their maintainers, and an opportunity for first-time contributors to get support and guidance making their first pull-requests! Check out the #Hacktoberfest topic on GitHub for projects you can contribute to.

    And we hope you think of awesome-azd as another possible project to contribute to.

    Check out the FAQ section to learn how to create, discover, and contribute templates. Or take a couple of minutes to watch this video walkthrough from Jon Gallant:

    And don't hesitate to reach out to us - either via Issues on the repo, or in the Discussions section of this site, to give us feedback!

    Happy Hacking! 🎃


    - + \ No newline at end of file diff --git a/blog/29-azure-developer-cli/index.html b/blog/29-azure-developer-cli/index.html index 13cdd79fde..2039f1abdf 100644 --- a/blog/29-azure-developer-cli/index.html +++ b/blog/29-azure-developer-cli/index.html @@ -14,7 +14,7 @@ - + @@ -26,7 +26,7 @@

    ...and that's it! We've successfully deployed our application on Azure!

    But there's more!

    Best practices: Monitoring and CI/CD!

    In my opinion, it's not enough to just set up the application on Azure! I want to know that my web app is performant and serving my users reliably! I also want to make sure that I'm not inadvertently breaking my application as I continue to make changes to it. Thankfully, the Azure Developer CLI also handles all of this via two additional commands - azd monitor and azd pipeline config.

    Application Monitoring

    When we provisioned all of our infrastructure, we also set up application monitoring via a Bicep file in our .infra/ directory that spec'd out an Application Insights dashboard. By running azd monitor we can see the dashboard with live metrics that was configured for the application.

    We can also navigate to the Application Dashboard by clicking on the resource group name, where you can set a specific refresh rate for the dashboard, and see usage, reliability, and performance metrics over time.

    I don't know about everyone else but I have spent a ton of time building out similar dashboards. It can be super time-consuming to write all the queries and create the visualizations so this feels like a real time saver.

    CI/CD

    Finally let's talk about setting up CI/CD! This might be my favorite azd feature. As I mentioned before, the Azure Developer CLI has a command, azd pipeline config, which uses the files in the .github/ directory to set up a GitHub Action. More than that, if there is no upstream repo, the Developer CLI will actually help you create one. But what does this mean exactly? Because our GitHub Action is using the same commands you'd run in the CLI under the hood, we're actually going to have CI/CD set up to run on every commit into the repo, against real Azure resources. What a sweet collaboration feature!

    That's it! We've gone end-to-end with the Azure Developer CLI - initialized a project, provisioned the resources on Azure, deployed our code on Azure, set up monitoring logs and dashboards, and set up a CI/CD pipeline with GitHub Actions to run on every commit into the repo (on real Azure resources!).

    Exercise: Try it yourself or create your own template!

    As an exercise, try out the workflow above with any template on GitHub!
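
    For reference, a minimal azd flow for trying out a template might look like this (the template name is just an example; any azd-enabled template works):

    ```bash
    azd init --template todo-nodejs-mongo   # scaffold a project from an azd template
    azd provision                           # provision the Azure resources defined in the template
    azd deploy                              # deploy the application code
    azd monitor --overview                  # open the monitoring dashboard
    azd pipeline config                     # set up the CI/CD pipeline
    ```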

    Or, try turning your own project into an Azure Developer CLI-enabled template by following this guidance. If you create your own template, don't forget to tag the repo with the azd-templates topic on GitHub to help others find it (unfamiliar with GitHub topics? Learn how to add topics to your repo)! We'd also love to chat with you about your experience creating an azd template - if you're open to providing feedback around this, please fill out this form!

    Resources

    - + \ No newline at end of file diff --git a/blog/archive/index.html b/blog/archive/index.html index d63fa89410..3d988dd274 100644 --- a/blog/archive/index.html +++ b/blog/archive/index.html @@ -14,13 +14,13 @@ - +

    Archive

    Archive

    2022

    - + \ No newline at end of file diff --git a/blog/index.html b/blog/index.html index ccfb120d97..2e58fc1993 100644 --- a/blog/index.html +++ b/blog/index.html @@ -14,13 +14,13 @@ - +

    · 5 min read
    Savannah Ostrowski

    Welcome to Beyond #30DaysOfServerless in October!

    Yes, it's October!! And since we ended #ServerlessSeptember with a focus on End-to-End Development for Serverless on Azure, we thought it would be good to share updates in October that can help you skill up even further.

    Today, we're following up on the Code to Cloud with azd blog post (Day #29) where we introduced the Azure Developer CLI (azd), an open-source tool for streamlining your end-to-end developer experience going from local development environment to Azure cloud. In today's post, we celebrate the October 2022 release of the tool, with three cool new features.

    And if it's October, it must be #Hacktoberfest!! Read on to learn about how you can take advantage of one of the new features, to contribute to the azd open-source community and ecosystem!

    Ready? Let's go!


    What We'll Cover

    • Azure Friday: Introducing the Azure Developer CLI (Video)
    • October 2022 Release: What's New in the Azure Developer CLI?
      • Azure Pipelines for CI/CD: Learn more
      • Improved Infrastructure as Code structure via Bicep modules: Learn more
      • A new azd template gallery: The new azd-templates gallery for community use! Learn more
    • Awesome-Azd: The new azd-templates gallery for Community use
      • Features: discover, create, contribute, request - templates
      • Hacktoberfest: opportunities to contribute in October - and beyond!


    Azure Friday

    This post is a follow-up to our #ServerlessSeptember post on Code to Cloud with Azure Developer CLI where we introduced azd, a new open-source tool that makes it quick and simple for you to move your application from a local development environment to Azure, streamlining your end-to-end developer workflow in the process.

    Prefer to watch a video overview? I have you covered! Check out my recent conversation with Scott Hanselman on Azure Friday where we:

    • talked about the code-to-cloud developer journey
    • walked through the ins and outs of an azd template
    • explored Azure Developer CLI commands in the terminal and VS Code, and
    • (probably most importantly) got a web app up and running on Azure with a database, Key Vault and monitoring all in a couple of minutes

    October Release

    We're pleased to announce the October 2022 release of the Azure Developer CLI (currently 0.3.0-beta.2). Read the release announcement for more details. Here are the highlights:

    • Azure Pipelines for CI/CD: This addresses azure-dev#101, adding support for Azure Pipelines (alongside GitHub Actions) as a CI/CD provider. Learn more about usage and related documentation.
    • Improved Infrastructure as Code structure via Bicep modules: This addresses azure-dev#543, which recognized the complexity of using a single resources.bicep file for all resources. With this release, azd templates now come with Bicep modules organized by purpose making it easier to edit and understand. Learn more about this structure, and how to use it.
    • New Templates Gallery - awesome-azd: This addresses azure-dev#398, which aimed to make templates more discoverable and easier to contribute. Learn more about how the new gallery improves the template discovery experience.

    In the next section, we'll dive briefly into the last feature, introducing the new awesome-azd site and resource for templates discovery and contribution. And, since it's #Hacktoberfest season, we'll talk about the Contributor Guide and the many ways you can contribute to this project - with, or without, code.


    It's awesome-azd

    Welcome to awesome-azd, a new template gallery hosted on GitHub Pages, meant to be a destination site for discovering, requesting, and contributing azd-templates for community use!

    In addition, its README reflects the awesome-list resource format, providing a location for the community to share "best of" resources for Azure Developer CLI - from blog posts and videos, to full-scale tutorials and templates.

    The Gallery is organized into three main areas:

    Take a minute to explore the Gallery and note the features:

    • Search for templates by name
    • Requested Templates - indicating asks from the community
    • Featured Templates - highlighting high-quality templates
    • Filters - to discover templates using and/or query combinations

    Check back often to see the latest contributed templates and requests!


    Hacktoberfest

    So, why is this a good time to talk about the Gallery? Because October means it's time for #Hacktoberfest - a month-long celebration of open-source projects and their maintainers, and an opportunity for first-time contributors to get support and guidance making their first pull-requests! Check out the #Hacktoberfest topic on GitHub for projects you can contribute to.

    And we hope you think of awesome-azd as another possible project to contribute to.

    Check out the FAQ section to learn how to create, discover, and contribute templates. Or take a couple of minutes to watch this video walkthrough from Jon Gallant:

    And don't hesitate to reach out to us - either via Issues on the repo, or in the Discussions section of this site, to give us feedback!

    Happy Hacking! 🎃


    - + \ No newline at end of file diff --git a/blog/microservices-10/index.html b/blog/microservices-10/index.html index 3d10795aaf..85f205ed41 100644 --- a/blog/microservices-10/index.html +++ b/blog/microservices-10/index.html @@ -14,13 +14,13 @@ - +

    10. Microservices Communication

    · 8 min read
    Paul Yu

    Welcome to Day 10 of #30DaysOfServerless!

    We continue our exploration of Azure Container Apps, with today's focus being communication between microservices, and how to configure your Azure Container Apps environment in the context of a deployment example.


    What We'll Cover

    • ACA Environments & Virtual Networking
    • Basic Microservices Communications
    • Walkthrough: ACA Deployment Example
    • Summary and Next Steps
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    In yesterday's post, we learned what the Azure Container Apps (ACA) service is and the problems it aims to solve. It is considered a Container-as-a-Service platform, since many of the complex implementation details of running a Kubernetes cluster are managed for you.

    Some of the use cases for ACA include event-driven processing jobs and background tasks, but this article will focus on hosting microservices and how they can communicate with each other within the ACA service. At the end of this article, you will have a solid understanding of how networking and communication are handled, and you'll have a few tutorials to try.

    Environments and virtual networking in ACA

    Before we jump into microservices communication, we should review how networking works within ACA. With ACA being a managed service, Azure will take care of most of your underlying infrastructure concerns. As you provision an ACA resource, Azure provisions an Environment to deploy Container Apps into. This environment is your isolation boundary.

    Azure Container Apps Environment

    By default, Azure creates and manages a new Virtual Network (VNET) for you and the VNET is associated with the environment. As you deploy container apps, they are deployed into the same VNET and the environment is assigned a static public IP address which allows your apps to be accessible over the internet. This VNET is not visible or manageable.

    If you need control of the networking flows within the VNET, you can pre-provision one and tell Azure to deploy an environment within it. This "bring-your-own" VNET model allows you to deploy an environment in either External or Internal modes. Deploying an environment in External mode gives you the flexibility of managing your own VNET, while still allowing your containers to be accessible from outside the environment; a static public IP address is assigned to the environment. When deploying in Internal mode, your containers are accessible within the environment and/or VNET but not accessible from the internet.

    Bringing your own VNET will require some planning, and you will need to dedicate an empty subnet which will be used exclusively by the ACA environment. The size of your subnet depends on how many containers you plan on deploying and your scaling requirements; one requirement to know is that the subnet address range must have a /23 CIDR prefix at minimum. You will also need to think about your deployment strategy, since ACA has the concept of Revisions which will also consume IPs from your subnet.

    Some additional restrictions to consider when planning your subnet address space are listed in the Resources section below and may be addressed in future posts, so be sure to follow us on dev.to and bookmark the ServerlessSeptember site.

    Basic microservices communication in ACA

    When it comes to communication between containers, ACA addresses this concern with its Ingress capabilities. With HTTP Ingress enabled on your container app, you can expose your app on an HTTPS endpoint.

    If your environment is deployed using default networking and your containers need to be accessible from outside the environment, you will need to set the Ingress traffic option to Accepting traffic from anywhere. This will generate a Fully-Qualified Domain Name (FQDN) which you can use to access your app right away. The ingress feature also generates and assigns a Secure Socket Layer (SSL) certificate for the FQDN.

    External ingress on Container App

    If your environment is deployed using default networking and your containers only need to communicate with other containers in the environment, you'll need to set the Ingress traffic option to Limited to Container Apps Environment. You get a FQDN here as well, but in the section below we'll see how that changes.

    Internal ingress on Container App
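
    If you prefer the CLI over the portal, roughly equivalent commands look like the sketch below (the app and resource group names are placeholders and the target port is an assumption); --type external corresponds to "Accepting traffic from anywhere" and --type internal to "Limited to Container Apps Environment".

    ```bash
    # External ingress: reachable from the internet
    az containerapp ingress enable --name greeting-service --resource-group my-aca-rg \
      --type external --target-port 80

    # Internal ingress: reachable only within the Container Apps environment
    az containerapp ingress enable --name hello-service --resource-group my-aca-rg \
      --type internal --target-port 80
    ```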

    As mentioned in the networking section above, if you deploy your ACA environment into a VNET in internal mode, your options will be Limited to Container Apps Environment or Limited to VNet.

    Ingress on internal virtual network

    Note how the Accepting traffic from anywhere option is greyed out. If your VNET is deployed in external mode, then the option will be available.

    Let's walk through an example ACA deployment

    The diagram below illustrates a simple microservices application that I deployed to ACA. The three container apps all have ingress enabled. The greeting-service app calls two backend services: a hello-service that returns the text Hello (in random casing) and a world-service that returns the text World (in a few random languages). The greeting-service concatenates the two strings together and returns Hello World to the browser. The greeting-service is the only service accessible via external ingress, while the two backend services are only accessible via internal ingress.

    Greeting Service overview

    With ingress enabled, let's take a quick look at the FQDN structures. Here is the FQDN of the external greeting-service.

    https://greeting-service.victoriouswave-3749d046.eastus.azurecontainerapps.io

    We can break it down into these components:

    https://[YOUR-CONTAINER-APP-NAME].[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    And here is the FQDN of the internal hello-service.

    https://hello-service.internal.victoriouswave-3749d046.eastus.azurecontainerapps.io

    Can you spot the difference between FQDNs?

    That was too easy 😉... the word internal is added as a subdomain in the FQDN between your container app name and the random name for all internal ingress endpoints.

    https://[YOUR-CONTAINER-APP-NAME].internal.[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    Now that we know the internal service FQDNs, we can use them in the greeting-service app to achieve basic service-to-service communication.

    So we can inject the FQDNs of downstream APIs into upstream apps using environment variables, but the downside to this approach is that you need to deploy the downstream containers ahead of time, and this dependency will need to be planned for during your deployment process. There are ways around this, and one option is to leverage the auto-injected environment variables within your app code.

    If you use the Console blade for the hello-service container app and run the env command, you will see environment variables named CONTAINER_APP_NAME and CONTAINER_APP_ENV_DNS_SUFFIX. You can use these values to determine FQDNs within your upstream app.

    hello-service environment variables

    Back in the greeting-service container, I can invoke the hello-service container's sayhello method. I know the container app name is hello-service and this service is exposed over an internal ingress; therefore, if I add the internal subdomain to the CONTAINER_APP_ENV_DNS_SUFFIX, I can invoke an HTTP request to the hello-service from my greeting-service container.

    Invoke the sayHello method from the greeting-service container
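
    As a quick illustration (assuming curl is available in the container image), from the greeting-service console you could build the internal FQDN from the injected variable and call the backend:

    ```bash
    # Run from the greeting-service container console; the hello-service name and sayhello path come from the example above.
    curl "https://hello-service.internal.${CONTAINER_APP_ENV_DNS_SUFFIX}/sayhello"
    ```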

    As you can see, the ingress feature enables communication with other container apps over HTTP/S, and ACA injects environment variables into our container to help determine what the ingress FQDNs would be. All we need now is a little code modification in the greeting-service app to build the FQDNs of our backend APIs from these environment variables.

    Greeting service code

    ... and now we have a working microservices app on ACA! 🎉

    Hello World

    Summary and next steps

    We've covered Container Apps networking and the basics of how containers communicate with one another. However, there is a better way to address service-to-service invocation using Dapr, which is an open-source framework for building microservices. It is natively integrated into the ACA service and in a future post, you'll learn how to enable it in your Container App to address microservices concerns and more. So stay tuned!

    Exercises

    As a takeaway for today's post, I encourage you to complete this tutorial, and if you'd like to deploy the sample app that was presented in this article, my teammate @StevenMurawski is hosting a docker-compose-examples repo which includes samples for deploying to ACA using Docker Compose files. To learn more about the az containerapp compose command, links to his blog articles are listed in the Resources section below.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun packing and shipping containers! See you in the next post!

    Resources

    The sample app presented here was inspired by services demonstrated in the book Introducing Distributed Application Runtime (Dapr): Simplifying Microservices Applications Development Through Proven and Reusable Patterns and Practices. Go check it out to learn more about Dapr!

    - + \ No newline at end of file diff --git a/blog/page/10/index.html b/blog/page/10/index.html index e3baad21e2..0818466269 100644 --- a/blog/page/10/index.html +++ b/blog/page/10/index.html @@ -14,13 +14,13 @@ - +

    · 6 min read
    Ramya Oruganti

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Retry Policy Support - in Apache Kafka Extension
    • AutoOffsetReset property - in Apache Kafka Extension
    • Key support for Kafka messages - in Apache Kafka Extension
    • References: Apache Kafka Extension for Azure Functions


    Recently we launched the Apache Kafka extension for Azure Functions in GA with some cool new features, like deserialization of Avro Generic records and Kafka headers support. We received great responses - so we're back with more updates!

    Retry Policy support

    Handling errors in Azure Functions is important to avoid data loss, avoid missed events, and to monitor the health of an application. The Apache Kafka Extension for Azure Functions supports a retry policy, which tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.

    A retry policy is evaluated when a trigger function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry.

    There are two retry strategies supported by the policy that you can configure: fixed delay and exponential backoff.

    1. Fixed Delay - A specified amount of time is allowed to elapse between each retry.
    2. Exponential Backoff - The first retry waits for the minimum delay. On subsequent retries, time is added exponentially to the initial duration for each retry, until the maximum delay is reached. Exponential back-off adds some small randomization to delays to stagger retries in high-throughput scenarios.
    Please Note

    The retry policy for the Kafka extension is NOT supported for the C# (in-proc and out-of-proc) trigger and output binding. It is supported for the Java, Node (JS, TypeScript), PowerShell and Python trigger and output bindings.

    Here is a sample code view of the exponential backoff retry strategy:

    Error Handling with Apache Kafka extension for Azure Functions

    AutoOffsetReset property

    The AutoOffsetReset property enables customers to configure the behaviour in the absence of an initial offset. Imagine a scenario where you need to change the consumer group name. A consumer connected with a new consumer group had to reprocess all events starting from the oldest (earliest) one, as that was the default and this setting wasn't previously exposed as a configurable option in the Apache Kafka extension for Azure Functions. With the help of this Kafka setting, you can now configure how to start processing events for newly created consumer groups.

    Due to the lack of ability to configure this setting, offset commit errors were causing topics to restart from the earliest offset. Users wanted to be able to set the offset to either latest or earliest based on their requirements.

    We are happy to share that we have made the AutoOffsetReset setting configurable to either Earliest (default) or Latest. Setting the value to Earliest configures consumption of messages from the earliest/smallest offset, or the beginning of the topic partition. Setting the property to Latest configures consumption of messages from the latest/largest offset, or the end of the topic partition. This is supported for all the Azure Functions supported languages (C# (in & out), Java, Node (JS and TypeScript), PowerShell and Python) and can be used for both triggers and output bindings.

    Error Handling with Apache Kafka extension for Azure Functions

    Key support for Kafka messages

    With keys, the producer/output binding can determine the broker and partition to write to based on the message. So alongside the message value, you can choose to send a message key, and that key can be whatever you want: a string, a number, and so on. If you don't send a key, the key is set to null and the data is sent to partitions in a round-robin fashion. But if you do send a key with your message, all the messages that share the same key will always go to the same partition, so you can group related messages into partitions.

    Previously, while consuming a Kafka event message using the Azure Functions Kafka extension, the event key was always none even though a key was present in the event message.

    Key support was implemented in the extension, enabling customers to view the key on Kafka event messages coming into the Kafka trigger and to set keys on messages going out to Kafka topics through the output binding. Key support covers both the trigger and output binding for all Azure Functions supported languages (C# (in & out), Java, Node (JS and TypeScript), PowerShell and Python).

    Here is a view of output binding producer code where Kafka messages are set with a key:

    Error Handling with Apache Kafka extension for Azure Functions

    Conclusion:

    In this article you have learned about the latest additions to the Apache Kafka extension for Azure Functions. If you have been waiting for these features or need them, you are all set: go ahead and try them out! They are available in the latest extension bundles.

    Want to learn more?

    Please refer to Apache Kafka bindings for Azure Functions | Microsoft Docs for detailed documentation, samples for the Azure Functions supported languages, and more!

    References

    FEEDBACK WELCOME

    Keep in touch with us on Twitter via @AzureFunctions.

    - + \ No newline at end of file diff --git a/blog/page/11/index.html b/blog/page/11/index.html index adcefab3a9..fbd422c5c9 100644 --- a/blog/page/11/index.html +++ b/blog/page/11/index.html @@ -14,14 +14,14 @@ - +

    · 5 min read
    Mike Morton

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Log Streaming - in Azure Portal
    • Console Connect - in Azure Portal
    • Metrics - using Azure Monitor
    • Log Analytics - using Azure Monitor
    • Metric Alerts and Log Alerts - using Azure Monitor


    In past weeks, @kendallroden wrote about what it means to be Cloud-Native and @Anthony Chu covered the various ways to get your apps running on Azure Container Apps. Today, we will talk about the observability tools you can use to observe, debug, and diagnose your Azure Container Apps.

    Azure Container Apps provides several observability features to help you debug and diagnose your apps. There are both Azure portal and CLI options you can use to help understand the health of your apps and help identify when issues arise.

    While these features are helpful throughout your container app’s lifetime, there are two that are especially helpful. Log streaming and console connect can be a huge help in the initial stages when issues often rear their ugly head. Let's dig into both of these a little.

    Log Streaming

    Log streaming allows you to use the Azure portal to view the streaming logs from your app. You’ll see the logs written from the app to the container’s console (stderr and stdout). If your app is running multiple revisions, you can choose from which revision to view logs. You can also select a specific replica if your app is configured to scale. Lastly, you can choose from which container to view the log output. This is useful when you are running a custom or Dapr sidecar container.

    view streaming logs

    Here’s an example CLI command to view the logs of a container app.

    az containerapp logs show -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.

    Console Connect

    In the Azure portal, you can connect to the console of a container in your app. Like log streaming, you can select the revision, replica, and container if applicable. After connecting to the console of the container, you can execute shell commands and utilities that you have installed in your container. You can view files and their contents, monitor processes, and perform other debugging tasks.

    This can be great for checking configuration files or even modifying a setting or library your container is using. Of course, updating a container in this fashion is not something you should do to a production app, but tweaking and re-testing an app in a non-production environment can speed up development.

    Here’s an example CLI command to connect to the console of a container app.

    az containerapp exec -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.

    Metrics

    Azure Monitor collects metric data from your container app at regular intervals to help you gain insights into the performance and health of your container app. Container apps provide these metrics:

    • CPU usage
    • Memory working set bytes
    • Network in bytes
    • Network out bytes
    • Requests
    • Replica count
    • Replica restart count

    Here you can see the metrics explorer showing the replica count for an app as it scaled from one replica to fifteen, and then back down to one.

    You can also retrieve metric data through the Azure CLI.
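
    For example, a rough sketch of pulling the Requests metric with az monitor metrics list (the resource ID is a placeholder):

    ```bash
    az monitor metrics list \
      --resource "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.App/containerApps/MyContainerapp" \
      --metric Requests \
      --interval PT1M
    ```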

    Log Analytics

    Azure Monitor Log Analytics is great for viewing your historical logs emitted from your container apps. There are two custom tables of interest: ContainerAppConsoleLogs_CL, which contains all the log messages written by your app (stdout and stderr), and ContainerAppSystemLogs_CL, which contains the system messages from the Azure Container Apps service.

    You can also query Log Analytics through the Azure CLI.
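
    As a sketch (the workspace GUID is a placeholder, and the log-analytics CLI extension may need to be installed first):

    ```bash
    az monitor log-analytics query \
      --workspace <log-analytics-workspace-guid> \
      --analytics-query "ContainerAppConsoleLogs_CL | take 20"
    ```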

    Alerts

    Azure Monitor alerts notify you so that you can respond quickly to critical issues. There are two types of alerts that you can define: metric alerts and log alerts.

    You can create alert rules from metric charts in the metric explorer and from queries in Log Analytics. You can also define and manage alerts from the Monitor|Alerts page.

    Here is what creating an alert looks like in the Azure portal. In this case we are setting an alert rule from the metric explorer to trigger an alert if the replica restart count for a specific container app is greater than two within the last fifteen minutes.

    To learn more about alerts, refer to Overview of alerts in Microsoft Azure.

    Conclusion

    In this article, we looked at the several ways to observe, debug, and diagnose your Azure Container Apps. As you can see, there are rich portal tools and a complete set of CLI commands to use. All the tools are helpful throughout the lifecycle of your app, so be sure to take advantage of them when you have an issue and to help prevent issues.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/page/12/index.html b/blog/page/12/index.html index e8cd6a60bb..faf1d004ab 100644 --- a/blog/page/12/index.html +++ b/blog/page/12/index.html @@ -14,14 +14,14 @@ - +

    · 10 min read
    Brian Benz

    Welcome to Day 18 of #30DaysOfServerless!

    Yesterday my Serverless September post introduced you to making Azure Logic Apps and Azure Cosmos DB work together with a sample application that collects weather data. Today I'm sharing a more robust solution that actually reads my mail. Let's learn about Teaching the cloud to read your mail!

    Ready? Let's go!


    What We'll Cover

    • Introduction to the ReadMail solution
    • Setting up Azure storage, Cosmos DB and Computer Vision
    • Connecting it all together with a Logic App
    • Resources: For self-study!


    Introducing the ReadMail solution

    The US Postal system offers a subscription service that sends you images of mail it will be delivering to your home. I decided it would be cool to try getting Azure to collect data based on these images, so that I could categorize my mail and track the types of mail that I received.

    To do this, I used Azure storage, Cosmos DB, Logic Apps, and computer vision. When a new email comes in from the US Postal service (USPS), it triggers a logic app that:

    • Posts attachments to Azure storage
    • Triggers Azure Computer vision to perform an OCR function on attachments
    • Extracts any results into a JSON document
    • Writes the JSON document to Cosmos DB

    workflow for the readmail solution

    In this post I'll walk you through setting up the solution for yourself.

    Prerequisites

    Setup Azure Services

    First, we'll create all of the target environments that our Logic App needs, then we'll create the Logic App.

    1. Azure Storage

    We'll be using Azure storage to collect attached images from emails as they arrive. Adding images to Azure storage will also trigger a workflow that performs OCR on new attached images and stores the OCR data in Cosmos DB.

    To create a new Azure storage account from the portal dashboard, Select Create a resource > Storage account > Create.

    The Basics tab covers all of the features and information that we will need for this solution:

    Section | Field | Required or optional | Description
    Project details | Subscription | Required | Select the subscription for the new storage account.
    Project details | Resource group | Required | Create a new resource group that you will use for storage, Cosmos DB, Computer Vision and the Logic App.
    Instance details | Storage account name | Required | Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
    Instance details | Region | Required | Select the appropriate region for your storage account.
    Instance details | Performance | Required | Select Standard performance for general-purpose v2 storage accounts (default).
    Instance details | Redundancy | Required | Select locally-redundant Storage (LRS) for this example.
    Select Review + create to accept the remaining default options, then validate and create the account.
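
    If you'd rather script this step, a rough az CLI equivalent (names are placeholders) is:

    ```bash
    az group create --name readmail-rg --location eastus
    az storage account create \
      --name readmailstorage123 \
      --resource-group readmail-rg \
      --sku Standard_LRS \
      --kind StorageV2
    ```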

    2. Azure CosmosDB

    Cosmos DB will be used to store the JSON documents returned by the Computer Vision OCR process.

    See more details and screen shots for setting up CosmosDB in yesterday's Serverless September post - Using Logic Apps with Cosmos DB

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then for simplicity use the same resource group you created when you set up storage. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    Setting | Action
    Container ID | id
    Container partition | /id

    Press OK to create a database and container
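
    The same account, database and container can be sketched with the az CLI if you prefer scripting (names are placeholders):

    ```bash
    az cosmosdb create --name readmail-cosmos --resource-group readmail-rg --enable-free-tier true
    az cosmosdb sql database create --account-name readmail-cosmos --resource-group readmail-rg --name readmaildb
    az cosmosdb sql container create --account-name readmail-cosmos --resource-group readmail-rg \
      --database-name readmaildb --name mailitems --partition-key-path "/id"
    ```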

    3. Azure Computer Vision

    Azure Cognitive Services' Computer Vision will perform an OCR process on each image attachment that is stored in Azure storage.

    From the portal dashboard, Select Create a resource > AI + Machine Learning > Computer Vision > Create.

    The Basics and Identity tabs cover all of the features and information that we will need for this solution:

    Basics Tab

    Section | Field | Required or optional | Description
    Project details | Subscription | Required | Select the subscription for the new service.
    Project details | Resource group | Required | Use the same resource group that you used for Azure storage and Cosmos DB.
    Instance details | Region | Required | Select the appropriate region for your Computer Vision service.
    Instance details | Name | Required | Choose a unique name for your Computer Vision service.
    Instance details | Pricing | Required | Select the free tier for this example.

    Identity Tab

    Section | Field | Required or optional | Description
    System assigned managed identity | Status | Required | Enable system assigned identity to grant the resource access to other existing resources.

    Select Review + create to accept the remaining default options, then validate and create the account.
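
    A rough az CLI equivalent for this resource (the name is a placeholder; the F0 SKU is the free tier) could be:

    ```bash
    az cognitiveservices account create \
      --name readmail-vision \
      --resource-group readmail-rg \
      --kind ComputerVision \
      --sku F0 \
      --location eastus \
      --assign-identity
    ```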


    Connect it all with a Logic App

    Now we're ready to put this all together in a Logic App workflow!

    1. Create Logic App

    From the portal dashboard, Select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location, the rest of the settings can be left at their defaults.

    2. Create Workflow: Add Trigger

    Once the Logic App is created, select Create a workflow from designer.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for outlook.com on the right under Add a trigger. Choose outlook.com. Choose When a new email arrives as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Set the following values:

    Parameter | Value
    Folder | Inbox
    Importance | Any
    Only With Attachments | Yes
    Include Attachments | Yes

    Then add a new parameter:

    Parameter | Value
    From | Add the email address that sends you the email with attachments
    3. Create Workflow: Add Action (for Trigger)

    Choose add an action and choose control > for-each.

    logic app for each

    Inside the for-each action, in Select an output from previous steps, choose attachments. Then, again inside the for-each action, add the create blob action:

    Set the following values:

    Parameter | Value
    Folder Path | /mailreaderinbox
    Blob Name | Attachments Name
    Blob Content | Attachments Content

    This extracts attachments from the email and creates a new blob for each attachment.

    Next, inside the same for-each action, add the get blob content action.

    Set the following values:

    Parameter | Value
    Blob | id
    Infer content type | Yes

    We create and read from a blob for each attachment because Computer Vision needs a non-virtual source to read from when performing an OCR process. Because we enabled a system assigned identity to grant Computer Vision access to other existing resources, it can access the blob but not the outlook.com attachment. Also, we pass the ID of the blob to use as a unique ID when writing to Cosmos DB.

    create blob from attachments

    Next, inside the same for-each action, choose add an action and choose control > condition. Set the value to Media Type > is equal to > image/JPEG

    The USPS sends attachments of multiple types, but we only want to scan attachments that have images of our mail, which are always JPEG images. If the condition is true, we will process the image with Computer Vision OCR and write the results to a JSON document in CosmosDB.

    In the True section of the condition, add an action and choose Computer Vision API > Optical Character Recognition (OCR) to JSON.

    Set the following values:

    Parameter | Value
    Image Source | Image Content
    Image content | File Content

    In the same True section of the condition, choose add an action and choose Cosmos DB. Choose Create or Update Document from the actions. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    Be sure to use the ID passed from blob storage as your unique ID for Cosmos DB. That way you can troubleshoot any JSON or OCR issues by tracing the JSON document in Cosmos DB back to the blob in Azure storage. Also, include the Computer Vision JSON response, as it contains the results of the Computer Vision OCR scan. All other elements are optional.

    4. TEST WORKFLOW

    When complete, you should have an action in the Logic App designer that looks something like this:

    Logic App workflow create or update document in cosmosdb

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB each time that an email arrives with image attachments.

    Check the data in Cosmos Db by opening the Data explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    5. Congratulations

    You just built your personal ReadMail solution with Logic Apps! 🎉


    Resources: For self-study!

    Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    - + \ No newline at end of file diff --git a/blog/page/13/index.html b/blog/page/13/index.html index 2768fad3a6..ba9296d443 100644 --- a/blog/page/13/index.html +++ b/blog/page/13/index.html @@ -14,14 +14,14 @@ - +

    · 6 min read
    Brian Benz

    Welcome to Day 17 of #30DaysOfServerless!

    In past weeks, we've covered serverless technologies that provide core capabilities (functions, containers, microservices) for building serverless solutions. This week we're looking at technologies that make service integrations more seamless, starting with Logic Apps. Let's look at one usage example today!

    Ready? Let's Go!


    What We'll Cover

    • Introduction to Logic Apps
    • Setting up Cosmos DB for Logic Apps
    • Setting up a Logic App connection and event
    • Writing data to Cosmos DB from a Logic app
    • Resources: For self-study!


    Introduction to Logic Apps

    Previously in Serverless September, we've covered Azure Functions, where the event triggers code. In Logic Apps, the event triggers a workflow that you design. Logic Apps enable serverless applications to connect to external sources for data then automate business processes via workflows.

    In this post I'll walk you through setting up a Logic App that works with Cosmos DB. For this example, we'll connect to the MSN weather service and design a Logic App workflow that collects data when the weather changes and writes the data to Cosmos DB.

    PREREQUISITES

    Setup Cosmos DB for Logic Apps

    Cosmos DB has many APIs to choose from, but to use the default Logic App connection, we need to choose the Cosmos DB SQL API. We'll set this up via the Azure Portal.

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then create a new resource group called CosmosWeather. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create

    Azure Cosmos DB is available in two different capacity modes: provisioned throughput and serverless. You can perform the same database operations in both modes, but the way you get billed for these operations is different. We will be using provisioned throughput and the free tier for this example.

    Setup the CosmosDB account

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    Setting | Action
    Container ID | id
    Container partition | /id

    Press OK to create a database and container
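
    If you prefer the CLI for this setup, a minimal sketch (account, database and container names are placeholders) is:

    ```bash
    az group create --name CosmosWeather --location eastus
    az cosmosdb create --name cosmosweather123 --resource-group CosmosWeather --enable-free-tier true
    az cosmosdb sql database create --account-name cosmosweather123 --resource-group CosmosWeather --name weatherdb
    az cosmosdb sql container create --account-name cosmosweather123 --resource-group CosmosWeather \
      --database-name weatherdb --name conditions --partition-key-path "/id"
    ```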

    A database is analogous to a traditional DBMS namespace. It's used to organize one or more containers.

    Setup the CosmosDB Container

    Now we're ready to set up our Logic App and write to Cosmos DB!

    Setup Logic App connection + event

    Once the Cosmos DB SQL API account is created, we can set up our Logic App. From the portal dashboard, Select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults. Once your new Logic App is created, select Create a workflow from designer to get started.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for weather on the right under Add a trigger. Choose MSN Weather. Choose When the current conditions change as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Add a location. Valid locations are City, Region, State, Country, Landmark, Postal Code, latitude and longitude. This triggers a new workflow when the conditions change for a location.

    Write data from Logic App to Cosmos DB

    Now we are ready to set up the action to write data to Cosmos DB. Choose add an action and choose Cosmos DB.

    An action is each step in a workflow after the trigger. Every action runs some operation in a workflow.

    In this case, we will be writing a JSON document to the Cosmos DB container we created earlier. Choose Create or Update Document from the actions. At this point you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger

    Start with the connection to set up the Cosmos DB action. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    You will need a unique ID for each document that you write to Cosmos DB, and for that you can use an expression. Because we declared id to be our unique ID in Cosmos DB, we will use that for the name. Under expressions, type guid() and press enter to add a unique ID to the JSON document. When complete, you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger and action

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB over the next few minutes.

    Check the data in Cosmos Db by opening the Data explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    Resources: For self-study!

    Once you've grasped the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    - + \ No newline at end of file diff --git a/blog/page/14/index.html b/blog/page/14/index.html index 1f8272ed1a..2f91eeec67 100644 --- a/blog/page/14/index.html +++ b/blog/page/14/index.html @@ -14,13 +14,13 @@ - +

    · 4 min read
    Nitya Narasimhan
    Devanshi Joshi

    Welcome to Day 15 of #30DaysOfServerless!

    This post marks the midpoint of our Serverless on Azure journey! Our Week 2 Roadmap showcased two key technologies - Azure Container Apps (ACA) and Dapr - for building serverless microservices. We'll also look at what happened elsewhere in #ServerlessSeptember, then set the stage for our next week's focus: Serverless Integrations.

    Ready? Let's Go!


    What We'll Cover

    • ICYMI: This Week on #ServerlessSeptember
    • Recap: Microservices, Azure Container Apps & Dapr
    • Coming Next: Serverless Integrations
    • Exercise: Take the Cloud Skills Challenge
    • Resources: For self-study!

    This Week In Events

    We had a number of activities happen this week - here's a quick summary:

    This Week in #30Days

    In our #30Days series we focused on Azure Container Apps and Dapr.

    • In Hello Container Apps we learned how Azure Container Apps helps you run microservices and containerized apps on serverless platforms. And we built and deployed our first ACA.
    • In Microservices Communication we explored concepts like environments and virtual networking, with a hands-on example to show how two microservices communicate in a deployed ACA.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and how to configure autoscaling for your ACA based on KEDA-supported triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr) and learned how its Building Block APIs and sidecar architecture make it easier to develop microservices with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
    • Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Here's a visual recap:

    Self Study: Code Samples & Tutorials

There's no better way to get familiar with the concepts than to dive in and play with code samples and hands-on tutorials. Here are 4 resources to bookmark and try out:

    1. Dapr Quickstarts - these walk you through samples showcasing individual Building Block APIs - with multiple language options available.
    2. Dapr Tutorials provides more complex examples of microservices applications and tools usage, including a Distributed Calculator polyglot app.
    3. Next, try to Deploy a Dapr application to Azure Container Apps to get familiar with the process of setting up the environment, then deploying the app.
    4. Or, explore the many Azure Container Apps samples showcasing various features and more complex architectures tied to real world scenarios.

    What's Next: Serverless Integrations!

    So far we've talked about core technologies (Azure Functions, Azure Container Apps, Dapr) that provide foundational support for your serverless solution. Next, we'll look at Serverless Integrations - specifically at technologies like Azure Logic Apps and Azure Event Grid that automate workflows and create seamless end-to-end solutions that integrate other Azure services in serverless-friendly ways.

    Take the Challenge!

    The Cloud Skills Challenge is still going on, and we've already had hundreds of participants join and complete the learning modules to skill up on Serverless.

    There's still time to join and get yourself on the leaderboard. Get familiar with Azure Functions, SignalR, Logic Apps, Azure SQL and more - in serverless contexts!!


    - + \ No newline at end of file diff --git a/blog/page/15/index.html b/blog/page/15/index.html index b70f29b5e0..b7b09759d1 100644 --- a/blog/page/15/index.html +++ b/blog/page/15/index.html @@ -14,7 +14,7 @@ - + @@ -24,7 +24,7 @@ Image showing container apps role assignment

  • Lastly, we need to restart the container app revision, to do so run the command below:

     ##Get revision name and assign it to a variable
    $REVISION_NAME = (az containerapp revision list `
    --name $BACKEND_SVC_NAME `
    --resource-group $RESOURCE_GROUP `
    --query [0].name)

    ##Restart revision by name
    az containerapp revision restart `
    --resource-group $RESOURCE_GROUP `
    --name $BACKEND_SVC_NAME `
    --revision $REVISION_NAME
  • Run end-to-end Test on Azure

From the Azure Portal, select the Azure Container App orders-processor and navigate to Log stream under the Monitoring tab; leave the stream connected and open. Then, from the Azure Portal, select the Azure Service Bus namespace ordersservices, select the topic orderreceivedtopic, select the subscription named orders-processor-subscription, and click on Service Bus Explorer (preview). From there we need to publish/send a message. Use the JSON payload below:

    ```json
{
  "data": {
    "reference": "Order 150",
    "quantity": 150,
    "createdOn": "2022-05-10T12:45:22.0983978Z"
  }
}
    ```

If all is configured correctly, you should start seeing the information logs in the Container Apps Log stream, similar to the images below.

Image showing publishing messages from Azure Service

    Information logs on the Log stream of the deployed Azure Container App Image showing ACA Log Stream

    🎉 CONGRATULATIONS

You have successfully deployed an Azure Container App to the cloud and configured the Dapr Pub/Sub API with Azure Service Bus.

    9. Clean up

    If you are done with the tutorial, use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name $RESOURCE_GROUP

    Exercise

I've left the configuration of the Dapr State Store API with Azure Cosmos DB for you :)

When you look at the action method OrderReceived in the controller ExternalOrdersController, you will see that I left a line with a ToDo: note; this line is responsible for saving the received message (OrderModel) into Azure Cosmos DB.

Other than that commented line, there is no need to change anything in the code base; that's the beauty of Dapr Building Blocks and how easily they let us plug components into our microservice application without any plumbing or bringing in external SDKs.

You do, however, need to work on the Dapr State Store configuration by creating a new component file, just as we did with the Pub/Sub API. The things you need to work on are:

• Provision an Azure Cosmos DB account and obtain its master key (see the sketch after this list).
    • Create a Dapr Component file adhering to Dapr Specs.
    • Create an Azure Container Apps component file adhering to ACA component specs.
    • Test locally on your dev machine using Dapr Component file.
• Register the new Dapr State Store component with the Azure Container Apps environment and set the Cosmos DB master key from the Azure Portal. If you want to challenge yourself further, use the Managed Identity approach as done in this post! It's the right way to protect your keys, and you won't have to worry about managing Cosmos DB keys anymore.
    • Build a new image of the application and push it to Azure Container Registry.
    • Update Azure Container Apps and create a new revision which contains the updated code.
• Verify the results by checking Azure Cosmos DB; you should see the Order Model stored in Cosmos DB.
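If you want a head start on the first item, here's a minimal sketch of provisioning the Cosmos DB account and grabbing its primary key with the Azure CLI (the account, database and container names below are placeholders I picked, not names from the project):

```bash
# Create a Cosmos DB (SQL API) account, database and container (names are examples)
az cosmosdb create --name my-orders-cosmos --resource-group $RESOURCE_GROUP
az cosmosdb sql database create --account-name my-orders-cosmos --resource-group $RESOURCE_GROUP --name ordersdb
az cosmosdb sql container create --account-name my-orders-cosmos --resource-group $RESOURCE_GROUP \
  --database-name ordersdb --name orders --partition-key-path "/id"

# Retrieve the primary (master) key that the Dapr state store component will need
COSMOS_KEY=$(az cosmosdb keys list --name my-orders-cosmos --resource-group $RESOURCE_GROUP \
  --type keys --query primaryMasterKey -o tsv)
```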

If you need help, you can always refer to my blog post Azure Container Apps State Store With Dapr State Management API, which contains exactly what you need to implement here, so I'm confident you will be able to complete this exercise with no issues. Happy coding :)

    What's Next?

If you enjoyed working with Dapr and Azure Container Apps, and you want a deeper dive into more complex scenarios (Dapr bindings, service discovery, autoscaling with KEDA, synchronous service communication, distributed tracing, health probes, etc.) where multiple services are deployed to a single Container Apps environment, I have created a detailed tutorial that walks you through building the application step by step.

The posts published so far are listed below, and I'm publishing more on a weekly basis, so stay tuned :)

    Resources

    - + \ No newline at end of file diff --git a/blog/page/16/index.html b/blog/page/16/index.html index aa1b0a1cf5..fd8e3fe0f2 100644 --- a/blog/page/16/index.html +++ b/blog/page/16/index.html @@ -14,14 +14,14 @@ - +

    · 11 min read
    Kendall Roden

    Welcome to Day 13 of #30DaysOfServerless!

    In the previous post, we learned about all things Distributed Application Runtime (Dapr) and highlighted the capabilities you can unlock through managed Dapr in Azure Container Apps! Today, we'll dive into how we can make use of Container Apps secrets and managed identities to securely access cloud-hosted resources that your Container Apps depend on!

    Ready? Let's go.


    What We'll Cover

    • Secure access to external services overview
    • Using Container Apps Secrets
    • Using Managed Identity for connecting to Azure resources
    • Using Dapr secret store component references (Dapr-only)
    • Conclusion
    • Resources: For self-study!


    Securing access to external services

In most, if not all, microservice-based applications, one or more services in the system will rely on other cloud-hosted resources: think external services like databases, secret stores, message brokers, event sources, etc. To interact with these services, an application must be able to establish a secure connection. Traditionally, an application will authenticate to these backing resources using some type of connection string or password.

    I'm not sure if it was just me, but one of the first things I learned as a developer was to ensure credentials and other sensitive information were never checked into the codebase. The ability to inject these values at runtime is a non-negotiable.

    In Azure Container Apps, applications can securely leverage connection information via Container Apps Secrets. If the resource is Azure-based, a more ideal solution that removes the dependence on secrets altogether is using Managed Identity.

    Specifically for Dapr-enabled container apps, users can now tap into the power of the Dapr secrets API! With this new capability unlocked in Container Apps, users can call the Dapr secrets API from application code to securely access secrets from Key Vault or other backing secret stores. In addition, customers can also make use of a secret store component reference when wiring up Dapr state store components and more!

ALSO, I'm excited to share that support for Dapr + Managed Identity is now available! What does this mean? You can enable Managed Identity for your container app, and when establishing connections via Dapr, the Dapr sidecar can use this identity. That means simplified components without the need for secrets when connecting to Azure services!

    Let's dive a bit deeper into the following three topics:

    1. Using Container Apps secrets in your container apps
    2. Using Managed Identity to connect to Azure services
    3. Connecting to services securely for Dapr-enabled apps

    Secure access to external services without Dapr

    Leveraging Container Apps secrets at runtime

Users can leverage this approach for any values which need to be securely stored; however, it is recommended to use Managed Identity where possible when connecting to Azure-specific resources.

    First, let's establish a few important points regarding secrets in container apps:

    • Secrets are scoped at the container app level, meaning secrets cannot be shared across container apps today
    • When running in multiple-revision mode,
      • changes to secrets do not generate a new revision
  • running revisions will not be automatically restarted to reflect changes. If you want to force-update existing container app revisions to reflect the changed secret values, you will need to perform revision restarts (a sketch follows this list).
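For example, rotating a secret and then forcing the running revision to pick it up could look like the sketch below (it reuses the app and secret names from the steps that follow, and assumes the az containerapp secret set command available in current CLI versions):

```bash
# Update the secret value stored on the container app
az containerapp secret set \
  --name myQueueApp \
  --resource-group my-resource-group \
  --secrets "queue-connection-string=$NEW_CONNECTION_STRING"

# Restart the active revision so running replicas pick up the new value
REVISION_NAME=$(az containerapp revision list \
  --name myQueueApp \
  --resource-group my-resource-group \
  --query "[0].name" -o tsv)

az containerapp revision restart \
  --name myQueueApp \
  --resource-group my-resource-group \
  --revision $REVISION_NAME
```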
    STEP 1

    Provide the secure value as a secret parameter when creating your container app using the syntax "SECRET_NAME=SECRET_VALUE"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name queuereader \
    --environment "my-environment-name" \
    --image demos/queuereader:v1 \
    --secrets "queue-connection-string=$CONNECTION_STRING"
    STEP 2

    Create an environment variable which references the value of the secret created in step 1 using the syntax "ENV_VARIABLE_NAME=secretref:SECRET_NAME"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name myQueueApp \
    --environment "my-environment-name" \
    --image demos/myQueueApp:v1 \
    --secrets "queue-connection-string=$CONNECTIONSTRING" \
    --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"

    This ConnectionString environment variable can be used within your application code to securely access the connection string value at runtime.

    Using Managed Identity to connect to Azure services

    A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. This approach is recommended where possible as it eliminates the need for managing secret credentials in your container apps and allows you to properly scope the permissions needed for a given container app using role-based access control. Both system-assigned and user-assigned identities are available in container apps. For more background on managed identities in Azure AD, see Managed identities for Azure resources.

To configure your app with a system-assigned managed identity, you will follow steps similar to the following:

    STEP 1

    Run the following command to create a system-assigned identity for your container app

    az containerapp identity assign \
    --name "myQueueApp" \
    --resource-group "my-resource-group" \
    --system-assigned
    STEP 2

    Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    az containerapp identity show \
    --name "myQueueApp" \
    --resource-group "my-resource-group"
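To actually capture the Principal ID in a shell variable as described, you can add a query to the same command (a sketch; principalId is the property returned for a system-assigned identity):

```bash
PRINCIPAL_ID=$(az containerapp identity show \
  --name "myQueueApp" \
  --resource-group "my-resource-group" \
  --query principalId -o tsv)
```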
    STEP 3

    Assign the appropriate roles and permissions to your container app's managed identity using the Principal ID in step 2 based on the resources you need to access (example below)

    az role assignment create \
    --role "Storage Queue Data Contributor" \
    --assignee $PRINCIPAL_ID \
    --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/queueServices/default/queues/<queue>"

After running the above commands, your container app will be able to access your Azure Storage queue because its managed identity has been assigned the "Storage Queue Data Contributor" role. The role assignments you create will depend solely on the resources your container app needs to access. To instrument your code to use this managed identity, see more details here.

    In addition to using managed identity to access services from your container app, you can also use managed identity to pull your container images from Azure Container Registry.
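As a sketch of that last point, granting the identity pull access on a registry and pointing the container app at it could look like this (the registry name is a placeholder; AcrPull is the standard role for image pulls):

```bash
# Allow the container app's identity to pull images from the registry
az role assignment create \
  --role "AcrPull" \
  --assignee $PRINCIPAL_ID \
  --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.ContainerRegistry/registries/<registry-name>"

# Tell the container app to authenticate to the registry with its system-assigned identity
az containerapp registry set \
  --name myQueueApp \
  --resource-group my-resource-group \
  --server <registry-name>.azurecr.io \
  --identity system
```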

    Secure access to external services with Dapr

    For Dapr-enabled apps, there are a few ways to connect to the resources your solutions depend on. In this section, we will discuss when to use each approach.

    1. Using Container Apps secrets in your Dapr components
    2. Using Managed Identity with Dapr Components
    3. Using Dapr Secret Stores for runtime secrets and component references

    Using Container Apps secrets in Dapr components

Prior to support for the Dapr Secrets Management building block, this was the only approach available for securely storing sensitive values for use in Dapr components.

In Dapr OSS, when no secret store reference is provided in a Dapr component file, the default secret store is set to "Kubernetes secrets". In Container Apps, we do not expose the ability to use this default store. Rather, Container Apps secrets can be used in its place.

    With the introduction of the Secrets API and the ability to use Dapr + Managed Identity, this approach is useful for a limited number of scenarios:

    • Quick demos and dev/test scenarios using the Container Apps CLI
    • Securing values when a secret store is not configured or available for use
• Using service principal credentials to configure an Azure Key Vault secret store component (using Managed Identity is recommended)
    • Securing access credentials which may be required when creating a non-Azure secret store component
    STEP 1

    Create a Dapr component which can be used by one or more services in the container apps environment. In the below example, you will create a secret to store the storage account key and reference this secret from the appropriate Dapr metadata property.

componentType: state.azure.blobstorage
version: v1
metadata:
- name: accountName
  value: testStorage
- name: accountKey
  secretRef: account-key
- name: containerName
  value: myContainer
secrets:
- name: account-key
  value: "<STORAGE_ACCOUNT_KEY>"
scopes:
- myApp
    STEP 2

    Deploy the Dapr component using the below command with the appropriate arguments.

     az containerapp env dapr-component set \
    --name "my-environment" \
    --resource-group "my-resource-group" \
    --dapr-component-name statestore \
    --yaml "./statestore.yaml"

    Using Managed Identity with Dapr Components

    Dapr-enabled container apps can now make use of managed identities within Dapr components. This is the most ideal path for connecting to Azure services securely, and allows for the removal of sensitive values in the component itself.

The Dapr sidecar makes use of the existing identities available within a given container app; Dapr itself does not have its own identity. Therefore, the steps to enable Dapr + MI are similar to those in the section regarding managed identity for non-Dapr apps. See the example steps below, specifically for using a system-assigned identity:

    1. Create a system-assigned identity for your container app

    2. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    3. Assign the appropriate roles and permissions (for accessing resources backing your Dapr components) to your ACA's managed identity using the Principal ID

    4. Create a simplified Dapr component without any secrets required

componentType: state.azure.blobstorage
version: v1
metadata:
- name: accountName
  value: testStorage
- name: containerName
  value: myContainer
scopes:
- myApp
    5. Deploy the component to test the connection from your container app via Dapr!
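Putting steps 3 and 5 together for the blob storage example above, a minimal sketch might be (the storage account scope and component file name are assumptions for illustration):

```bash
# Step 3: allow the identity to read/write blobs in the backing storage account
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee $PRINCIPAL_ID \
  --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/teststorage"

# Step 5: deploy the simplified (secret-free) component to the environment
az containerapp env dapr-component set \
  --name "my-environment" \
  --resource-group "my-resource-group" \
  --dapr-component-name statestore \
  --yaml "./statestore.yaml"
```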

Keep in mind, all Dapr components will be loaded by each Dapr-enabled container app in an environment by default. To prevent apps without the appropriate permissions from attempting (and failing) to load a component, use scopes. This ensures that only applications with the appropriate identities to access the backing resource load the component.

    Using Dapr Secret Stores for runtime secrets and component references

    Dapr integrates with secret stores to provide apps and other components with secure storage and access to secrets such as access keys and passwords. The Dapr Secrets API is now available for use in Container Apps.

    Using Dapr’s secret store building block typically involves the following:

    • Setting up a component for a specific secret store solution.
    • Retrieving secrets using the Dapr secrets API in the application code.
    • Optionally, referencing secrets in Dapr component files.

    Let's walk through a couple sample workflows involving the use of Dapr's Secrets Management capabilities!

    Setting up a component for a specific secret store solution

    1. Create an Azure Key Vault instance for hosting the secrets required by your application.

      az keyvault create --name "<your-unique-keyvault-name>" --resource-group "my-resource-group" --location "<your-location>"
    2. Create an Azure Key Vault component in your environment without the secrets values, as the connection will be established to Azure Key Vault via Managed Identity.

  componentType: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  scopes:
  - myApp

  az containerapp env dapr-component set \
  --name "my-environment" \
  --resource-group "my-resource-group" \
  --dapr-component-name secretstore \
  --yaml "./secretstore.yaml"
    3. Run the following command to create a system-assigned identity for your container app

      az containerapp identity assign \
      --name "myApp" \
      --resource-group "my-resource-group" \
      --system-assigned
    4. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

      az containerapp identity show \
      --name "myApp" \
      --resource-group "my-resource-group"
    5. Assign the appropriate roles and permissions to your container app's managed identity to access Azure Key Vault

      az role assignment create \
      --role "Key Vault Secrets Officer" \
      --assignee $PRINCIPAL_ID \
      --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
    6. Begin using the Dapr Secrets API in your application code to retrieve secrets! See additional details here.
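From inside your container app, retrieving a secret through the Dapr sidecar is just an HTTP call. A sketch (my-secret is an example secret name, and secretstore matches the component name used above):

```bash
# The Dapr sidecar listens on port 3500 by default
curl http://localhost:3500/v1.0/secrets/secretstore/my-secret
# Response shape: {"my-secret":"<secret value>"}
```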

    Referencing secrets in Dapr component files

    Once a Dapr secret store component is available in the environment, it can be used to retrieve secrets for use in other components. For example, when creating a state store component, you can add a reference to the Dapr secret store from which you would like to source connection information. You will no longer use secrets directly in the component spec, but rather will instruct the Dapr sidecar to retrieve the secrets from the specified store.

componentType: state.azure.blobstorage
version: v1
metadata:
- name: accountName
  value: testStorage
- name: accountKey
  secretRef: account-key
- name: containerName
  value: myContainer
secretStoreComponent: "<SECRET_STORE_COMPONENT_NAME>"
scopes:
- myApp

    Summary

In this post, we covered the high-level details of how to work with secret values in Azure Container Apps for both Dapr and non-Dapr apps. In the next article, we will walk through a complex end-to-end Dapr example that makes use of the new support for Dapr + Managed Identity. Stay tuned for additional documentation around Dapr secrets, as it will be released in the next two weeks!

    Resources

    Here are the main resources to explore for self-study:

    - + \ No newline at end of file diff --git a/blog/page/17/index.html b/blog/page/17/index.html index 7389335d5e..e00fbcb476 100644 --- a/blog/page/17/index.html +++ b/blog/page/17/index.html @@ -14,13 +14,13 @@ - +

    · 8 min read
    Nitya Narasimhan

    Welcome to Day 12 of #30DaysOfServerless!

    So far we've looked at Azure Container Apps - what it is, how it enables microservices communication, and how it enables auto-scaling with KEDA compliant scalers. Today we'll shift gears and talk about Dapr - the Distributed Application Runtime - and how it makes microservices development with ACA easier with core building blocks and a sidecar architecture!

    Ready? Let's go!


    What We'll Cover

    • What is Dapr and why use it?
    • Building Block APIs
    • Dapr Quickstart and Tutorials
    • Dapr-enabled ACA: A Sidecar Approach
    • Exercise: Build & Deploy a Dapr-enabled ACA.
    • Resources: For self-study!


    Hello, Dapr!

Building distributed applications is hard. Building reliable and portable microservices means having middleware that deals with challenges like service discovery, sync and async communications, state management, secure information sharing and more. Integrating these support services into your application can be challenging from both development and maintenance perspectives, adding complexity that is independent of the core application logic you want to focus on.

This is where Dapr (Distributed Application Runtime) shines - it's defined as:

    a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.

    But what does this actually mean to me as an app developer?


    Dapr + Apps: A Sidecar Approach

    The strength of Dapr lies in its ability to:

    • abstract complexities of distributed systems middleware - with Building Block APIs that implement components using best practices to tackle key challenges.
    • implement a Sidecar Pattern with interactions via APIs - allowing applications to keep their codebase clean and focus on app logic.
    • be Incrementally Adoptable - allowing developers to start by integrating one API, then evolving to use more as and when needed.
    • be Platform Agnostic - allowing applications to be developed in a preferred language or framework without impacting integration capabilities.

The application-Dapr sidecar interaction is illustrated below. The API abstraction allows applications to get the desired functionality without having to know how it was implemented, and without having to integrate Dapr-specific code into their codebase. Note how the sidecar process listens on port 3500 and the API provides clear routes for the specific building blocks supported by Dapr (e.g., /secrets, /state, etc.)
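For a concrete sense of what those calls look like, here's a sketch of an app talking to its sidecar over two of those routes (the component names statestore and pubsub are examples, not anything pre-provisioned for you):

```bash
# Save state through the State Management API
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "order1", "value": { "reference": "Order 150" } }]'

# Publish a message through the Pub/Sub API
curl -X POST http://localhost:3500/v1.0/publish/pubsub/orders \
  -H "Content-Type: application/json" \
  -d '{ "reference": "Order 150" }'
```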


    Dapr Building Blocks: API Interactions

Dapr Building Blocks are HTTP and gRPC APIs exposed by the Dapr sidecar, providing key capabilities like state management, observability, service-to-service invocation, pub/sub messaging and more to the associated application.

    Building Blocks: Under the Hood
The Dapr API is implemented by modular components that codify best practices for tackling the specific challenge they represent. The API abstraction allows component implementations to evolve, or alternatives to be used, without requiring changes to the application codebase.

    The latest Dapr release has the building blocks shown in the above figure. Not all capabilities are available to Azure Container Apps by default - check the documentation for the latest updates on this. For now, Azure Container Apps + Dapr integration provides the following capabilities to the application:

    In the next section, we'll dive into Dapr-enabled Azure Container Apps. Before we do that, here are a couple of resources to help you explore the Dapr platform by itself, and get more hands-on experience with the concepts and capabilities:

• Dapr Quickstarts - build your first Dapr app, then explore quickstarts for core APIs including service-to-service invocation, pub/sub, state management, bindings and secrets management.
    • Dapr Tutorials - go beyond the basic quickstart and explore more realistic service integrations and usage scenarios. Try the distributed calculator example!

    Integrate Dapr & Azure Container Apps

Dapr currently has a v1.9 (preview) version, but Azure Container Apps supports Dapr v1.8. In this section, we'll look at what it takes to enable, configure, and use Dapr integration with Azure Container Apps. It involves 3 steps: enabling Dapr using settings, configuring Dapr components (API) for use, then invoking the APIs.

Here's a simple publisher-subscriber scenario from the documentation. We have two container apps identified as publisher-app and subscriber-app deployed in a single environment. Each ACA has an activated daprd sidecar, allowing them to use the Pub/Sub API to communicate asynchronously with each other - without having to write the underlying pub/sub implementation themselves. Rather, we can see that the Dapr API uses a pubsub.azure.servicebus component to implement that capability.

    Pub/sub example

    Let's look at how this is setup.

    1. Enable Dapr in ACA: Settings

    We can enable Dapr integration in the Azure Container App during creation by specifying settings in one of two ways, based on your development preference:

    • Using Azure CLI: use custom commandline options for each setting
    • Using Infrastructure-as-Code (IaC): using properties for Bicep, ARM templates

Once enabled, Dapr will run in the same environment as the Azure Container App and listen on port 3500 for API requests. The Dapr sidecar can be shared by multiple container apps deployed in the same environment.

    There are four main settings we will focus on for this demo - the example below shows the ARM template properties, but you can find the equivalent CLI parameters here for comparison.

    • dapr.enabled - enable Dapr for Azure Container App
    • dapr.appPort - specify port on which app is listening
    • dapr.appProtocol - specify if using http (default) or gRPC for API
    • dapr.appId - specify unique application ID for service discovery, usage

These are defined under the properties.configuration section for your resource. Changing Dapr settings does not create a new revision, but it will restart ACA revisions and replicas. Here is what the relevant section of the ARM template looks like for the publisher-app ACA in the scenario shown above.

    "dapr": {
    "enabled": true,
    "appId": "publisher-app",
    "appProcotol": "http",
    "appPort": 80
    }
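For comparison, here's a sketch of the equivalent CLI flags when creating the container app (flag names come from the containerapp CLI extension; the image and other required parameters are placeholders):

```bash
az containerapp create \
  --name publisher-app \
  --resource-group my-resource-group \
  --environment my-environment \
  --image <your-publisher-image> \
  --enable-dapr \
  --dapr-app-id publisher-app \
  --dapr-app-port 80 \
  --dapr-app-protocol http
```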

    2. Configure Dapr in ACA: Components

The next step after activating the Dapr sidecar is to define the APIs that you want to use and, potentially, the Dapr components (specific implementations of that API) that you prefer. These components are created at the environment level, and by default, Dapr-enabled container apps in an environment will load the complete set of deployed components -- use the scopes property to ensure only components needed by a given app are loaded at runtime. Here's what the ARM template resources section looks like for the example above. This tells us that the environment has a dapr-pubsub component of type pubsub.azure.servicebus deployed - where that component is loaded by the container apps with the Dapr app IDs (publisher-app, subscriber-app).

    USING MANAGED IDENTITY + DAPR

The secrets approach used here is ideal for demo purposes. However, we recommend using Managed Identity with Dapr in production. For more details on secrets, check out tomorrow's post on Secrets and Managed Identity in Azure Container Apps.

{
  "resources": [
    {
      "type": "daprComponents",
      "name": "dapr-pubsub",
      "properties": {
        "componentType": "pubsub.azure.servicebus",
        "version": "v1",
        "secrets": [
          {
            "name": "sb-root-connectionstring",
            "value": "value"
          }
        ],
        "metadata": [
          {
            "name": "connectionString",
            "secretRef": "sb-root-connectionstring"
          }
        ],
        // Application scopes
        "scopes": ["publisher-app", "subscriber-app"]
      }
    }
  ]
}
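If you prefer the CLI over ARM, the same component can be registered at the environment level from a YAML file (a sketch; dapr-pubsub.yaml would contain the componentType, metadata, secrets and scopes shown above):

```bash
az containerapp env dapr-component set \
  --name my-environment \
  --resource-group my-resource-group \
  --dapr-component-name dapr-pubsub \
  --yaml ./dapr-pubsub.yaml
```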

    With this configuration, the ACA is now set to use pub/sub capabilities from the Dapr sidecar, using standard HTTP requests to the exposed API endpoint for this service.

    Exercise: Deploy Dapr-enabled ACA

    In the next couple posts in this series, we'll be discussing how you can use the Dapr secrets API and doing a walkthrough of a more complex example, to show how Dapr-enabled Azure Container Apps are created and deployed.

However, you can get hands-on experience with these concepts by walking through one of these two tutorials, each providing an alternative approach to configure and set up the application described in the scenario below:

    Resources

    Here are the main resources to explore for self-study:

    - + \ No newline at end of file diff --git a/blog/page/18/index.html b/blog/page/18/index.html index 0f2a31ca50..5fc4278819 100644 --- a/blog/page/18/index.html +++ b/blog/page/18/index.html @@ -14,13 +14,13 @@ - +

    · 6 min read
    Melony Qin

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Custom Handlers, and why use them?
    • How Custom Handler Works
    • Message Processing With Azure Custom Handler
    • Azure Portal Monitoring


If you have been working with Azure Functions for a while, you may know that Azure Functions is a serverless FaaS (Function as a Service) offering provided by Microsoft Azure, built for key scenarios including building web APIs, processing file uploads, responding to database changes, processing IoT data streams, managing message queues, and more.

    Custom Handlers: What and Why

Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell. If you want extended language support with Azure Functions for other languages such as Go and Rust, that's where custom handlers come in.

An Azure Functions custom handler lets you author functions in any language that supports HTTP primitives. With custom handlers, you can use triggers and input and output bindings via extension bundles, so all the triggers and bindings you're used to with Azure Functions are supported.

    How a Custom Handler Works

Let's take a look at custom handlers and how they work.

• A request is sent to the Functions host when an event is triggered. It's up to the Functions host to issue a request payload to the custom handler, which holds the trigger and input binding data as well as other metadata for the function.
• The custom handler is a local HTTP web server. It executes the function code and returns a response payload to the Functions host.
• The Functions host passes data from the response to the function's output bindings, which is then passed on to the downstream services for data processing.

Check out this article to learn more about Azure Functions custom handlers.


    Message processing with Custom Handlers

    Message processing is one of the key scenarios that Azure functions are trying to address. In the message-processing scenario, events are often collected in queues. These events can trigger Azure functions to execute a piece of business logic.

    You can use the Service Bus trigger to respond to messages from an Azure Service Bus queue - it's then up to the Azure functions custom handlers to take further actions to process the messages. The process is described in the following diagram:

    Building Serverless Go Applications with Azure functions custom handlers

In Azure Functions, the function.json file defines the function's trigger, input and output bindings, and other configuration settings. Note that every function can have multiple bindings, but only one trigger. The following is an example of setting up the Service Bus queue trigger in the function.json file:

{
  "bindings": [
    {
      "name": "queueItem",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "functionqueue",
      "connection": "ServiceBusConnection"
    }
  ]
}

You can add a binding definition in the function.json to write the output to a database or another destination of your choice. Supported bindings can be found here.

Since we're programming in Go, we need to set the value of defaultExecutablePath to the handler executable in the customHandler.description section of the host.json file.

Assuming we're on Windows and have named our Go application server.go, running the go build server.go command produces an executable called server.exe. So we set server.exe in host.json, as in the following example:

      "customHandler": {
    "description": {
    "defaultExecutablePath": "./server.exe",
    "workingDirectory": "",
    "arguments": []
    }
    }

We're showcasing a simple Go application with Azure Functions custom handlers, where we print out the messages received from the Functions host. The following is the full code of the server.go application:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
)

// InvokeRequest is the payload the Functions host sends to the custom handler.
type InvokeRequest struct {
    Data     map[string]json.RawMessage
    Metadata map[string]interface{}
}

func queueHandler(w http.ResponseWriter, r *http.Request) {
    var invokeRequest InvokeRequest

    // Decode the invocation payload sent by the Functions host
    d := json.NewDecoder(r.Body)
    d.Decode(&invokeRequest)

    // Extract the Service Bus message from the "queueItem" binding and print it
    var parsedMessage string
    json.Unmarshal(invokeRequest.Data["queueItem"], &parsedMessage)

    fmt.Println(parsedMessage)
}

func main() {
    // The Functions host tells the custom handler which port to listen on
    customHandlerPort, exists := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
    if !exists {
        customHandlerPort = "8080"
    }
    mux := http.NewServeMux()
    mux.HandleFunc("/MessageProcessorFunction", queueHandler)
    fmt.Println("Go server Listening on: ", customHandlerPort)
    log.Fatal(http.ListenAndServe(":"+customHandlerPort, mux))
}

Ensure you have Azure Functions Core Tools installed, then use the func start command to start the function. Next, we use a C#-based message sender application on GitHub to send 3,000 messages to the Azure Service Bus queue. You'll see Azure Functions instantly start to process the messages and print them out as follows:

    Monitoring Serverless Go Applications with Azure functions custom handlers
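For reference, the local build-and-run flow described above boils down to two commands (a sketch; it assumes Azure Functions Core Tools and the Go toolchain are installed, and that defaultExecutablePath in host.json points at the built binary):

```bash
# Build the custom handler executable (server.exe on Windows)
go build server.go

# Start the Functions host locally; it launches the custom handler and wires up the Service Bus trigger
func start
```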


    Azure portal monitoring

Let's go back to the Azure portal to see how the messages in the Azure Service Bus queue were processed. 3,000 messages were queued in the Service Bus queue (the blue line represents incoming messages). The outgoing messages (the red line with the smaller wave shape) show that they are progressively being read by Azure Functions, as shown below:

    Monitoring Serverless Go Applications with Azure functions custom handlers

Check out this article about monitoring Azure Service Bus for further information.

    Next steps

    Thanks for following along, we’re looking forward to hearing your feedback. Also, if you discover potential issues, please record them on Azure Functions host GitHub repository or tag us @AzureFunctions on Twitter.

    RESOURCES

To start building your serverless applications with custom handlers, check out the official documentation:

    Life is a journey of learning. Let’s stay tuned!

    - + \ No newline at end of file diff --git a/blog/page/19/index.html b/blog/page/19/index.html index 516d519594..736f10ca4c 100644 --- a/blog/page/19/index.html +++ b/blog/page/19/index.html @@ -14,13 +14,13 @@ - +

    · 5 min read
    Anthony Chu

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Using Visual Studio
    • Using Visual Studio Code: Docker, ACA extensions
    • Using Azure CLI
    • Using CI/CD Pipelines


    Last week, @kendallroden wrote about what it means to be Cloud-Native and how Azure Container Apps provides a serverless containers platform for hosting all of your Cloud-Native applications. Today, we’ll walk through a few ways to get your apps up and running on Azure Container Apps.

    Depending on where you are in your Cloud-Native app development journey, you might choose to use different tools to deploy your apps.

    • “Right-click, publish” – Deploying an app directly from an IDE or code editor is often seen as a bad practice, but it’s one of the quickest ways to test out an app in a cloud environment.
    • Command line interface – CLIs are useful for deploying apps from a terminal. Commands can be run manually or in a script.
    • Continuous integration/deployment – To deploy production apps, the recommended approach is to automate the process in a robust CI/CD pipeline.

    Let's explore some of these options in more depth.

    Visual Studio

    Visual Studio 2022 has built-in support for deploying .NET applications to Azure Container Apps. You can use the familiar publish dialog to provision Container Apps resources and deploy to them directly. This helps you prototype an app and see it run in Azure Container Apps with the least amount of effort.

    Journey to the cloud with Azure Container Apps

    Once you’re happy with the app and it’s ready for production, Visual Studio allows you to push your code to GitHub and set up a GitHub Actions workflow to build and deploy your app every time you push changes. You can do this by checking a box.

    Journey to the cloud with Azure Container Apps

    Visual Studio Code

    There are a couple of valuable extensions that you’ll want to install if you’re working in VS Code.

    Docker extension

    The Docker extension provides commands for building a container image for your app and pushing it to a container registry. It can even do this without requiring Docker Desktop on your local machine --- the “Build image in Azure” command remotely builds and pushes a container image to Azure Container Registry.

    Journey to the cloud with Azure Container Apps

    And if your app doesn’t have a dockerfile, the extension will generate one for you.

    Journey to the cloud with Azure Container Apps

    Azure Container Apps extension

    Once you’ve built your container image and pushed it to a registry, the Azure Container Apps VS Code extension provides commands for creating a container app and deploying revisions using the image you’ve built.

    Journey to the cloud with Azure Container Apps

    Azure CLI

    The Azure CLI can be used to manage pretty much anything in Azure. For Azure Container Apps, you’ll find commands for creating, updating, and managing your Container Apps resources.

    Just like in VS Code, with a few commands in the Azure CLI, you can create your Azure resources, build and push your container image, and then deploy it to a container app.

    To make things as simple as possible, the Azure CLI also has an “az containerapp up” command. This single command takes care of everything that’s needed to turn your source code from your local machine to a cloud-hosted application in Azure Container Apps.

    az containerapp up --name myapp --source ./src

    We saw earlier that Visual Studio can generate a GitHub Actions workflow to automatically build and deploy your app on every commit. “az containerapp up” can do this too. The following adds a workflow to a repo.

    az containerapp up --name myapp --repo https://github.com/myorg/myproject

    CI/CD pipelines

    When it’s time to take your app to production, it’s strongly recommended to set up a CI/CD pipeline to automatically and repeatably build, test, and deploy it. We’ve already seen that tools such as Visual Studio and Azure CLI can automatically generate a workflow for GitHub Actions. You can set up a pipeline in Azure DevOps too. This is an example Azure DevOps pipeline.

trigger:
  branches:
    include:
    - main

pool:
  vmImage: ubuntu-latest

stages:

- stage: Build
  jobs:
  - job: build
    displayName: Build app
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'myregistry'
        repository: 'hello-aca'
        command: 'buildAndPush'
        Dockerfile: 'hello-container-apps/Dockerfile'
        tags: '$(Build.BuildId)'

- stage: Deploy
  jobs:
  - job: deploy
    displayName: Deploy app
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'my-subscription(5361b9d6-46ea-43c3-a898-15f14afb0db6)'
        scriptType: 'bash'
        scriptLocation: 'inlineScript'
        inlineScript: |
          # automatically install Container Apps CLI extension
          az config set extension.use_dynamic_install=yes_without_prompt

          # ensure registry is configured in container app
          az containerapp registry set \
            --name hello-aca \
            --resource-group mygroup \
            --server myregistry.azurecr.io \
            --identity system

          # update container app
          az containerapp update \
            --name hello-aca \
            --resource-group mygroup \
            --image myregistry.azurecr.io/hello-aca:$(Build.BuildId)

    Conclusion

    In this article, we looked at a few ways to deploy your Cloud-Native applications to Azure Container Apps and how to decide which one to use based on where you are in your app’s journey to the cloud.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/page/2/index.html b/blog/page/2/index.html index de05f89ff0..1a74ce9020 100644 --- a/blog/page/2/index.html +++ b/blog/page/2/index.html @@ -14,14 +14,14 @@ - +

    · 7 min read
    Devanshi Joshi

    It's Serverless September in a Nutshell! Join us as we unpack our month-long learning journey exploring the core technology pillars for Serverless architectures on Azure. Then end with a look at next steps to build your Cloud-native applications on Azure.


    What We'll Cover

    • Functions-as-a-Service (FaaS)
    • Microservices and Containers
    • Serverless Integrations
    • End-to-End Solutions
    • Developer Tools & #Hacktoberfest

    Banner for Serverless September


    Building Cloud-native Apps

    By definition, cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. You can learn more about cloud-native in Kendall Roden's #ServerlessSeptember post on Going Cloud-native with Azure Container Apps.

Serverless technologies accelerate productivity and minimize costs for deploying applications at cloud scale. So, what can we build with serverless technologies in cloud-native on Azure? Anything that is event-driven - examples include:

    • Microservices - scaled by KEDA-compliant triggers
    • Public API Endpoints - scaled by #concurrent HTTP requests
    • Event-Driven Applications - scaled by length of message queue
    • Web Applications - scaled by #concurrent HTTP requests
    • Background Process - scaled by CPU and Memory usage

    Great - but as developers, we really want to know how we can get started building and deploying serverless solutions on Azure. That was the focus of our #ServerlessSeptember journey. Let's take a quick look at the four key themes.

    Functions-as-a-Service (FaaS)

    Functions-as-a-Service (FaaS) is the epitome of developer productivity for full-stack modern apps. As developers, you don't manage infrastructure and focus only on business logic and application code. And, with Serverless Compute you only pay for when your code runs - making this the simplest first step to begin migrating your application to cloud-native.

In Azure, FaaS is provided by Azure Functions. Check out our Functions + Serverless on Azure to go from learning core concepts to building your first Functions app in your programming language of choice. Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell.

    Want to get extended language support for languages like Go, and Rust? You can Use Custom Handlers to make this happen! But what if you want to have long-running functions, or create complex workflows involving more than one function? Read our post on Durable Entities to learn how you can orchestrate this with Azure Functions.

    Check out this recent AskTheExpert Q&A session with the Azure Functions team to get answers to popular community questions on Azure Functions features and usage.

    Microservices and Containers

    Functions-as-a-Service is an ideal first step towards serverless development. But Functions are just one of the 5 pillars of cloud-native. This week we'll look at two of the other pillars: microservices and containers - with specific focus on two core technologies: Azure Container Apps and Dapr (Distributed Application Runtime).

    In this 6-part series of posts, we walk through each technology independently, before looking at the value of building Azure Container Apps with Dapr.

    • In Hello Container Apps we learned core concepts & deployed our first ACA.
    • In Microservices Communication we learned about ACA environments and virtual networks, and how microservices communicate in ACA with a hands-on tutorial.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and configuring ACA for autoscaling with KEDA-compliant triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr), exploring its Building Block APIs and sidecar architecture for working with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
• Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Build ACA with Dapr

    Check out this recent AskTheExpert Q&A session with the Azure Container Apps team for answers to popular community questions on core features and usage.

    Serverless Integrations

    In the first half of the month we looked at compute resources for building and deploying serverless applications. In the second half, we look at integration tools and resources that automate developer workflows to streamline the end-to-end developer experience.

In Azure, this is enabled by services like Azure Logic Apps and Azure Event Grid. Azure Logic Apps provides a visual designer to create and automate workflows with little or no code involved. Azure Event Grid provides a highly scalable event broker with support for pub/sub communications to drive async event-driven architectures.

    • In Tracking Weather Data Changes With Logic Apps we look at how you can use Logic Apps to integrate the MSN weather service with Azure CosmosDB, allowing automated collection of weather data on changes.

    • In Teach the Cloud to Read & Categorize Mail we take it a step further, using Logic Apps to automate a workflow that includes a Computer Vision service to "read" images and store the results to CosmosDB.

    • In Integrate with Microsoft Graph we explore a multi-cloud scenario (Azure + M365) where change notifications from Microsoft Graph can be integrated using Logic Apps and Event Hubs to power an onboarding workflow.

    • In Cloud Events with Event Grid we learn about the CloudEvents specification (for consistently describing event data) - and learn how Event Grid brokers events in this format. Azure Logic Apps can be an Event handler (subscriber) that uses the event to trigger an automated workflow on receipt.

      Azure Event Grid And Logic Apps

    Want to explore other such integrations? Browse Azure Architectures and filter by selected Azure services for more real-world scenarios.


    End-to-End Solutions

    We've covered serverless compute solutions (for building your serverless applications) and serverless integration services to automate end-to-end workflows in synchronous or asynchronous event-driven architectures. In this final week, we want to leave you with a sense of end-to-end development tools and use cases that can be enabled by Serverless on Azure. Here are some key examples:

Article | Description
• In this tutorial, you'll learn to deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps - with a Blazor front-end and two Web API projects.
• Deploy Java containers to cloud: In this tutorial you learn to build and deploy a Java application running on Spring Boot, by publishing it in a container to Azure Container Registry, then deploying to Azure Container Apps from ACR, via the Azure Portal.
• **Where am I? My GPS Location with Serverless Power Platform Custom Connector**: In this step-by-step tutorial you learn to integrate a serverless application (built on Azure Functions and OpenAPI) with Power Platform custom connectors via Azure API Management (API-M). This pattern can empower a new ecosystem of fusion apps for cases like inventory management.
    And in our Serverless Hacks initiative, we walked through an 8-step hack to build a serverless tollbooth. Check out this 12-part video walkthrough of a reference solution using .NET.

    Developer Tools

    But wait - there's more. Those are a sample of the end-to-end application scenarios that are built on serverless on Azure. But what about the developer experience? In this article, we say hello to the Azure Developer CLI - an open-source tool that streamlines your develop-deploy workflow, with simple commands that map to core stages of your development journey. Go from code to cloud with one CLI
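As a quick taste of that workflow, the core commands look something like this (a sketch; the template name is just one example from the azd gallery):

```bash
# Scaffold a project from a template, then provision Azure resources and deploy in one step
azd init --template todo-nodejs-mongo
azd up
```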

    And watch this space for more such tutorials and content through October, including a special #Hacktoberfest focused initiative to encourage and support first-time contributors to open-source. Here's a sneak peek at the project we plan to share - the new awesome-azd templates gallery.


    Join us at Microsoft Ignite!

    Want to continue your learning journey, and learn about what's next for Serverless on Azure? Microsoft Ignite happens Oct 12-14 this year and has multiple sessions on relevant technologies and tools. Check out the Session Catalog and register here to attend online.

    - + \ No newline at end of file diff --git a/blog/page/20/index.html b/blog/page/20/index.html index 75a541eac2..55c517f54c 100644 --- a/blog/page/20/index.html +++ b/blog/page/20/index.html @@ -14,13 +14,13 @@ - +

    · 7 min read
    Paul Yu

    Welcome to Day 11 of #30DaysOfServerless!

Yesterday we explored Azure Container Apps concepts related to environments, networking and microservices communication - and illustrated these with a deployment example. Today, we turn our attention to scaling your container apps with demand.


    What We'll Cover

    • What makes ACA Serverless?
    • What is Keda?
    • Scaling Your ACA
    • ACA Scaling In Action
    • Exercise: Explore azure-opensource-labs examples
    • Resources: For self-study!


    So, what makes Azure Container Apps "serverless"?

    Today we are going to focus on what makes Azure Container Apps (ACA) a "serverless" offering. But what does the term "serverless" really mean? As much as we'd like to think there aren't any servers involved, that is certainly not the case. In general, "serverless" means that most (if not all) server maintenance has been abstracted away from you.

With serverless, you don't spend any time managing and patching servers. This concern is offloaded to Azure and you simply focus on adding business value through application delivery. In addition to operational efficiency, cost efficiency can be achieved with serverless on-demand pricing models. Your workload horizontally scales out based on need and you only pay for what you use. To me, this is serverless, and my teammate @StevenMurawski said it best... "being able to scale to zero is what gives ACA its serverless magic."

    Scaling your Container Apps

If you don't know by now, ACA is built on a solid open-source foundation. Behind the scenes, it runs on a managed Kubernetes cluster and includes several open-source components out of the box, including Dapr to help you build and run microservices, Envoy Proxy for ingress capabilities, and KEDA for event-driven autoscaling. Again, you do not need to install these components yourself. All you need to be concerned with is enabling and/or configuring your container app to leverage these components.

    Let's take a closer look at autoscaling in ACA to help you optimize your container app.

    What is KEDA?

KEDA stands for Kubernetes Event-Driven Autoscaler. It is an open-source project initially started by Microsoft and Red Hat and has been donated to the Cloud-Native Computing Foundation (CNCF). It is maintained by a community of 200+ contributors and adopted by many large organizations. In terms of its status as a CNCF project, it is currently in the Incubating Stage, which means the project has gone through significant due diligence and is on its way towards the Graduation Stage.

Prior to KEDA, horizontally scaling your Kubernetes deployment was achieved through the Horizontal Pod Autoscaler (HPA), which relies on resource metrics such as CPU and memory to determine when additional replicas should be deployed. Being limited to CPU and memory falls a bit short for certain workloads. This is especially true for apps that need to process messages from a queue, or HTTP-based apps that can only handle a specific number of incoming HTTP requests at a time. KEDA aims to fill that gap and provides a much more robust framework for scaling by working in conjunction with HPA. It offers many scalers for you to implement and even allows your deployments to scale to zero! 🥳

    KEDA architecture

    Configuring ACA scale rules

As I mentioned above, ACA's autoscaling feature leverages KEDA and gives you the ability to configure the number of replicas to deploy based on rules (event triggers). The number of replicas can be configured as a static number or a range (minimum and maximum). So if you need your containers to run 24/7, set the min and max to the same value. By default, when you deploy a container app, it is set to scale from 0 to 10 replicas. The default scaling rule uses HTTP scaling and defaults to a minimum of 10 concurrent requests per second. Once the threshold of 10 concurrent requests per second is met, another replica will be deployed until the maximum number of replicas is reached.

    At the time of this writing, a container app can have up to 30 replicas.

    Default autoscaler

    As a best practice, if you have a Min / max replicas range configured, you should configure a scaling rule even if it is just explicitly setting the default values.
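As a sketch of that best practice, explicitly setting the replica range and the default HTTP rule from the CLI might look like this (flag names are from recent versions of the containerapp CLI extension; the app name is a placeholder):

```bash
az containerapp update \
  --name myapp \
  --resource-group my-resource-group \
  --min-replicas 0 \
  --max-replicas 10 \
  --scale-rule-name http-rule \
  --scale-rule-type http \
  --scale-rule-http-concurrency 10
```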

    Adding HTTP scaling rule

    In addition to HTTP scaling, you can also configure an Azure queue rule, which allows you to use Azure Storage Queues as an event data source.

    Adding Azure Queue scaling rule

    The most flexibility comes with the Custom rule type. This opens up a LOT more options for scaling. All of KEDA's event-based scalers are supported with this option 🚀

    Adding Custom scaling rule

    Translating KEDA templates to Azure templates

    When you implement Custom rules, you need to become familiar with translating KEDA templates to Azure Resource Manager templates or ACA YAML manifests. The KEDA scaler documentation is great, and it should be simple to translate KEDA template metadata to ACA rule metadata.

    The images below show how to translate a scaling rule which uses Azure Service Bus as an event data source. The custom rule type is set to azure-servicebus and the details of the service bus are added to the Metadata section. One important thing to note here is that the connection string to the service bus was added as a secret on the container app, and the trigger parameter must be set to connection.

    Azure Container App custom rule metadata

    Azure Container App custom rule metadata
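
    If you prefer the CLI over the portal, here is a rough equivalent of the rule shown above, sketched with the containerapp extension. The secret, queue, and app names are placeholders, and the metadata keys come from KEDA's azure-servicebus scaler:

    # Store the Service Bus connection string as a container app secret
    az containerapp secret set \
    --name my-container-app \
    --resource-group my-container-apps \
    --secrets servicebus-connection=<SERVICE-BUS-CONNECTION-STRING>

    # Add the custom scale rule; the trigger's "connection" parameter points at that secret
    az containerapp update \
    --name my-container-app \
    --resource-group my-container-apps \
    --scale-rule-name servicebus-rule \
    --scale-rule-type azure-servicebus \
    --scale-rule-metadata "queueName=my-queue" "messageCount=5" \
    --scale-rule-auth "connection=servicebus-connection"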

    Additional examples of KEDA scaler conversion can be found in the resources section and example video below.

    See Container App scaling in action

    Now that we've built up some foundational knowledge on how ACA autoscaling is implemented and configured, let's look at a few examples.

    Autoscaling based on HTTP traffic load

    Autoscaling based on Azure Service Bus message queues

    Summary

    ACA brings you a true serverless experience and gives you the ability to configure autoscaling rules based on KEDA scaler templates. This gives you the flexibility to scale based on a wide variety of data sources in an event-driven manner. With the number of built-in scalers currently available, there is probably a scaler out there for all your use cases. If not, I encourage you to get involved with the KEDA community and help make it better!

    Exercise

    By now, you've probably read and seen enough and are ready to give this autoscaling thing a try. The example I walked through in the videos above can be found in the azure-opensource-labs repo. I highly encourage you to head over to the containerapps-terraform folder and try the lab out. There you'll find instructions which cover all the steps and tools you'll need to implement autoscaling container apps within your own Azure subscription.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun scaling your containers!

    Resources


    · 8 min read
    Paul Yu

    Welcome to Day 10 of #30DaysOfServerless!

    We continue our exploration into Azure Container Apps, with today's focus being communication between microservices, and how to configure your Azure Container Apps environment in the context of a deployment example.


    What We'll Cover

    • ACA Environments & Virtual Networking
    • Basic Microservices Communications
    • Walkthrough: ACA Deployment Example
    • Summary and Next Steps
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    In yesterday's post, we learned what the Azure Container Apps (ACA) service is and the problems it aims to solve. It is considered to be a Container-as-a-Service platform, since many of the complex implementation details of running a Kubernetes cluster are managed for you.

    Some of the use cases for ACA include event-driven processing jobs and background tasks, but this article will focus on hosting microservices and how they can communicate with each other within the ACA service. At the end of this article, you will have a solid understanding of how networking and communication are handled, and you'll be left with a few tutorials to try.

    Environments and virtual networking in ACA

    Before we jump into microservices communication, we should review how networking works within ACA. With ACA being a managed service, Azure will take care of most of your underlying infrastructure concerns. As you provision an ACA resource, Azure provisions an Environment to deploy Container Apps into. This environment is your isolation boundary.

    Azure Container Apps Environment

    By default, Azure creates and manages a new Virtual Network (VNET) for you and the VNET is associated with the environment. As you deploy container apps, they are deployed into the same VNET and the environment is assigned a static public IP address which allows your apps to be accessible over the internet. This VNET is not visible or manageable.

    If you need control of the networking flows within the VNET, you can pre-provision one and tell Azure to deploy an environment within it. This "bring-your-own" VNET model allows you to deploy an environment in either External or Internal modes. Deploying an environment in External mode gives you the flexibility of managing your own VNET, while still allowing your containers to be accessible from outside the environment; a static public IP address is assigned to the environment. When deploying in Internal mode, your containers are accessible within the environment and/or VNET but not accessible from the internet.

    Bringing your own VNET will require some planning, and you will need to dedicate an empty subnet which will be used exclusively by the ACA environment. The size of your subnet will depend on how many containers you plan on deploying and your scaling requirements. One requirement to know is that the subnet address range must have a /23 CIDR prefix at minimum. You will also need to think about your deployment strategy, since ACA has the concept of Revisions, which will also consume IPs from your subnet.
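
    As a rough sketch, deploying an environment into a pre-provisioned subnet in Internal mode looks something like this with the Azure CLI (the subnet resource ID is a placeholder, and the flag names assume the current containerapp extension):

    # The subnet must be empty, dedicated to ACA, and at least a /23
    az containerapp env create \
    --name my-internal-environment \
    --resource-group my-container-apps \
    --location eastus \
    --infrastructure-subnet-resource-id <SUBNET-RESOURCE-ID> \
    --internal-only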

    Some additional restrictions to consider when planning your subnet address space are listed in the Resources section below and may be addressed in future posts, so be sure to follow us on dev.to and bookmark the ServerlessSeptember site.

    Basic microservices communication in ACA

    When it comes to communications between containers, ACA addresses this concern with its Ingress capabilities. With HTTP Ingress enabled on your container app, you can expose your app on an HTTPS endpoint.

    If your environment is deployed using default networking and your containers need to be accessible from outside the environment, you will need to set the Ingress traffic option to Accepting traffic from anywhere. This will generate a Fully Qualified Domain Name (FQDN) which you can use to access your app right away. The ingress feature also generates and assigns a Secure Sockets Layer (SSL) certificate for the FQDN.

    External ingress on Container App

    If your environment is deployed using default networking and your containers only need to communicate with other containers in the environment, you'll need to set the Ingress traffic option to Limited to Container Apps Environment. You get a FQDN here as well, but in the section below we'll see how that changes.

    Internal ingress on Container App

    As mentioned in the networking section above, if you deploy your ACA environment into a VNET in internal mode, your options will be Limited to Container Apps Environment or Limited to VNet.

    Ingress on internal virtual network

    Note how the Accepting traffic from anywhere option is greyed out. If your VNET is deployed in external mode, then the option will be available.

    Let's walk through an example ACA deployment

    The diagram below illustrates a simple microservices application that I deployed to ACA. The three container apps all have ingress enabled. The greeting-service app calls two backend services: a hello-service that returns the text Hello (in random casing) and a world-service that returns the text World (in a few random languages). The greeting-service concatenates the two strings together and returns Hello World to the browser. The greeting-service is the only service accessible via external ingress, while the two backend services are only accessible via internal ingress.

    Greeting Service overview

    With ingress enabled, let's take a quick look at the FQDN structures. Here is the FQDN of the external greeting-service.

    https://greeting-service.victoriouswave-3749d046.eastus.azurecontainerapps.io

    We can break it down into these components:

    https://[YOUR-CONTAINER-APP-NAME].[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    And here is the FQDN of the internal hello-service.

    https://hello-service.internal.victoriouswave-3749d046.eastus.azurecontainerapps.io

    Can you spot the difference between FQDNs?

    That was too easy 😉... the word internal is added as a subdomain in the FQDN between your container app name and the random name for all internal ingress endpoints.

    https://[YOUR-CONTAINER-APP-NAME].internal.[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    Now that we know the internal service FQDNs, we use them in the greeting-service app to achieve basic service-to-service communications.

    So we can inject the FQDNs of downstream APIs into upstream apps using environment variables, but the downside to this approach is that you need to deploy the downstream containers ahead of time, and this dependency will need to be planned for during your deployment process. There are ways around this, and one option is to leverage the auto-injected environment variables within your app code.

    If you open the Console blade for the hello-service container app and run the env command, you will see environment variables named CONTAINER_APP_NAME and CONTAINER_APP_ENV_DNS_SUFFIX. You can use these values to determine FQDNs within your upstream app.

    hello-service environment variables

    Back in the greeting-service container, I can invoke the hello-service container's sayhello method. I know the container app name is hello-service and this service is exposed over an internal ingress; therefore, if I add the internal subdomain to the CONTAINER_APP_ENV_DNS_SUFFIX, I can invoke an HTTP request to the hello-service from my greeting-service container.
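
    As a quick sketch, that request could look like this from a shell inside the greeting-service container (the /sayhello route is illustrative):

    # CONTAINER_APP_ENV_DNS_SUFFIX is auto-injected by ACA; prepend the app name and "internal"
    curl "https://hello-service.internal.${CONTAINER_APP_ENV_DNS_SUFFIX}/sayhello"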

    Invoke the sayHello method from the greeting-service container

    As you can see, the ingress feature enables communications to other container apps over HTTP/S, and ACA will inject environment variables into our container to help determine what the ingress FQDNs would be. All we need now is a little bit of code modification in the greeting-service app to build the FQDNs of our backend APIs from these environment variables.

    Greeting service code

    ... and now we have a working microservices app on ACA! 🎉

    Hello World

    Summary and next steps

    We've covered Container Apps networking and the basics of how containers communicate with one another. However, there is a better way to address service-to-service invocation using Dapr, which is an open-source framework for building microservices. It is natively integrated into the ACA service and in a future post, you'll learn how to enable it in your Container App to address microservices concerns and more. So stay tuned!

    Exercises

    As a takeaway for today's post, I encourage you to complete this tutorial, and if you'd like to deploy the sample app that was presented in this article, my teammate @StevenMurawski is hosting a docker-compose-examples repo which includes samples for deploying to ACA using Docker Compose files. To learn more about the az containerapp compose command, links to his blog articles are listed in the Resources section below.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun packing and shipping containers! See you in the next post!

    Resources

    The sample app presented here was inspired by services demonstrated in the book Introducing Distributed Application Runtime (Dapr): Simplifying Microservices Applications Development Through Proven and Reusable Patterns and Practices. Go check it out to learn more about Dapr!


    · 12 min read
    Nitya Narasimhan

    Welcome to Day 9 of #30DaysOfServerless!


    What We'll Cover

    • The Week Ahead
    • Hello, Container Apps!
    • Quickstart: Build Your First ACA!
    • Under The Hood: Core ACA Concepts
    • Exercise: Try this yourself!
    • Resources: For self-study!


    The Week Ahead

    Welcome to Week 2 of #ServerlessSeptember, where we put the focus on Microservices and building Cloud-Native applications that are optimized for serverless solutions on Azure. One week is not enough to do this complex topic justice so consider this a 7-part jumpstart to the longer journey.

    1. Hello, Container Apps (ACA) - Learn about Azure Container Apps, a key service that helps you run microservices and containerized apps on a serverless platform. Know the core concepts. (Tutorial 1: First ACA)
    2. Communication with Microservices - Dive deeper into two key concepts: environments and virtual networking. Learn how microservices communicate in ACA, and walkthrough an example. (Tutorial 2: ACA with 3 Microservices)
    3. Scaling Your Container Apps - Learn about KEDA. Understand how to configure your ACA for auto-scaling with KEDA-supported triggers. Put this into action by walking through a tutorial. (Tutorial 3: Configure Autoscaling)
    4. Hello, Distributed Application Runtime (Dapr) - Learn about Dapr and how its Building Block APIs simplify microservices development with ACA. Know how the sidecar pattern enables incremental adoption of Dapr APIs without requiring any Dapr code integration in your app. (Tutorial 4: Setup & Explore Dapr)
    5. Building ACA with Dapr - See how Dapr works with ACA by building a Dapr-enabled Azure Container App. Walk through a .NET tutorial using Pub/Sub and State Management APIs in an enterprise scenario. (Tutorial 5: Build ACA with Dapr)
    6. Managing Secrets With Dapr - We'll look at the Secrets API (a key Building Block of Dapr) and learn how it simplifies management of sensitive information in ACA.
    7. Microservices + Serverless On Azure - We recap Week 2 (Microservices) and set the stage for Week 3 (Integrations) of Serverless September. Plus, self-study resources including ACA development tutorials in different languages.

    Ready? Let's go!


    Azure Container Apps!

    When building your application, your first decision is about where you host your application. The Azure Architecture Center has a handy chart to help you decide between choices like Azure Functions, Azure App Service, Azure Container Instances, Azure Container Apps and more. But if you are new to this space, you'll need a good understanding of the terms and concepts behind the services. Today, we'll focus on Azure Container Apps (ACA) - so let's start with the fundamentals.

    Containerized App Defined

    A containerized app is one where the application components, dependencies, and configuration are packaged into a single file (container image), which can be instantiated in an isolated runtime environment (container) that is portable across hosts (OS). This makes containers lightweight and scalable - and ensures that applications behave consistently on different host platforms.

    Container images can be shared via container registries (public or private), helping developers discover and deploy related apps with less effort. Scaling a containerized app can be as simple as activating more instances of its container image. However, this requires container orchestrators to automate the management of container apps for efficiency. Orchestrators like Kubernetes support capabilities such as workload scheduling, self-healing, and auto-scaling on demand.
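
    As a minimal illustration with the Docker CLI (the image and registry names are placeholders), packaging, running, and sharing an image boils down to three commands:

    docker build -t myregistry.azurecr.io/myapp:1.0 .    # package app + dependencies into an image
    docker run -p 8080:80 myregistry.azurecr.io/myapp:1.0    # instantiate it as an isolated container
    docker push myregistry.azurecr.io/myapp:1.0    # share it via a container registry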

    Cloud-Native & Microservices

    Containers are seen as one of the 5 pillars of Cloud-Native app development, an approach where applications are designed explicitly to take advantage of the unique benefits of modern dynamic environments (involving public, private and hybrid clouds). Containers are particularly suited to serverless solutions based on microservices.

    • With serverless - developers use managed services instead of managing their own infrastructure. Services are typically event-driven and can be configured for autoscaling with rules tied to event triggers. Serverless is cost-effective, with developers paying only for the compute cycles and resources they use.
    • With microservices - developers compose their applications from independent components. Each component can be deployed in its own container, and scaled at that granularity. This simplifies component reuse (across apps) and maintainability (over time) - with developers evolving functionality at microservice (vs. app) levels.

    Hello, Azure Container Apps!

    Azure Container Apps is the managed service that helps you run containerized apps and microservices as a serverless compute solution, on Azure. You can:

    • deploy serverless API endpoints - autoscaled by HTTP request traffic
    • host background processing apps - autoscaled by CPU or memory load
    • handle event-driven processing - autoscaled by #messages in queue
    • run microservices - autoscaled by any KEDA-supported scaler.

    Want a quick intro to the topic? Start by watching the short video below - then read these two posts from our ZeroToHero series:


    Deploy Your First ACA

    Dev Options

    We typically have three options for development:

    • Use the Azure Portal - provision and deploy from a browser.
    • Use Visual Studio Code (with relevant extensions) - if you prefer an IDE
    • Using Azure CLI - if you prefer to build and deploy from command line.

    The documentation site has quickstarts for three contexts:

    For this quickstart, we'll go with the first option (sample image) so we can move quickly to core concepts. We'll leave the others as an exercise for you to explore.

    1. Setup Resources

    PRE-REQUISITES

    You need:

    • An Azure account with an active subscription
    • An installed Azure CLI

    Start by logging into Azure from the CLI. The command should launch a browser to complete the auth flow (or give you an option to take an alternative path).

    $ az login

    Successful authentication will result in extensive command-line output detailing the status of your subscription.

    Next, install the Azure Container Apps extension for the CLI

    $ az extension add --name containerapp --upgrade
    ...
    The installed extension 'containerapp' is in preview.

    Once successfully installed, register the Microsoft.App namespace.

    $ az provider register --namespace Microsoft.App

    Then set local environment variables in that terminal - and verify they are set correctly:

    $ RESOURCE_GROUP="my-container-apps"
    $ LOCATION="canadacentral"
    $ CONTAINERAPPS_ENVIRONMENT="my-environment"

    $ echo $LOCATION $RESOURCE_GROUP $CONTAINERAPPS_ENVIRONMENT
    canadacentral my-container-apps my-environment

    Now you can use Azure CLI to provision a resource group for this tutorial. Creating a resource group also makes it easier for us to delete/reclaim all resources used at the end of this tutorial.

    az group create \
    --name $RESOURCE_GROUP \
    --location $LOCATION
    Congratulations

    You completed the Setup step!

    On completion, the console should print out the details of the newly created resource group. You should also be able to visit the Azure Portal and find the newly-active my-container-apps resource group under your active subscription.

    2. Create Environment

    An environment is like the picket fence around your property. It creates a secure boundary that contains a group of container apps - such that all apps deployed to it share the same virtual network and logging resources.

    $ az containerapp env create \
    --name $CONTAINERAPPS_ENVIRONMENT \
    --resource-group $RESOURCE_GROUP \
    --location $LOCATION

    No Log Analytics workspace provided.
    Generating a Log Analytics workspace with name ...

    This can take a few minutes. When done, you will see the terminal display more details. You can also check the resource group in the portal and see that a Container Apps Environment and a Log Analytics Workspace are created for you as part of this step.

    You've got the fence set up. Now it's time to build your home - er, container app!

    3. Create Container App

    Here's the command we'll use to create our first Azure Container App. Note that the --image argument provides the link to a pre-existing containerapps-helloworld image.

    az containerapp create \
    --name my-container-app \
    --resource-group $RESOURCE_GROUP \
    --environment $CONTAINERAPPS_ENVIRONMENT \
    --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
    --target-port 80 \
    --ingress 'external' \
    --query properties.configuration.ingress.fqdn
    ...
    ...

    Container app created. Access your app at <URL>

    The --ingress property shows that the app is open to external requests; in other words, it is publicly visible at the <URL> that is printed out on the terminal on successful completion of this step.

    4. Verify Deployment

    Let's see if this works. You can verify your container app is running by visiting the URL returned above in your browser. You should see something like this!

    Container App Hello World

    You can also visit the Azure Portal and look under the created Resource Group. You should see that a new resource of type Container App was created in this step.
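
    If you prefer the terminal, a quick sanity check is to look up the FQDN with the CLI and request it directly (a sketch, reusing the names from this tutorial):

    FQDN=$(az containerapp show \
    --name my-container-app \
    --resource-group $RESOURCE_GROUP \
    --query properties.configuration.ingress.fqdn -o tsv)

    curl -I "https://$FQDN"    # expect an HTTP 200 from the hello-world app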

    Congratulations

    You just created and deployed your first "Hello World" Azure Container App! This validates your local development environment setup and existence of a valid Azure subscription.

    5. Clean Up Your Resources

    It's good practice to clean up resources once you are done with a tutorial.

    THIS ACTION IS IRREVERSIBLE

    This command deletes the resource group we created above - and all resources in it. So make sure you specified the right name, then confirm deletion.

    $ az group delete --name $RESOURCE_GROUP
    Are you sure you want to perform this operation? (y/n):

    Note that you can also delete the resource group from the Azure Portal interface if that feels more comfortable. For now, we'll just use the Portal to verify that deletion occurred. If you had previously opened the Resource Group page for the created resource, just refresh it. You should see something like this:

    Resource Not Found


    Core Concepts

    COMING SOON

    An illustrated guide summarizing these concepts in a single sketchnote.

    We covered a lot today - we'll stop with a quick overview of core concepts behind Azure Container Apps, each linked to documentation for self-study. We'll dive into more details on some of these concepts in upcoming articles:

    • Environments - are the secure boundary around a group of container apps that are deployed in the same virtual network. They write logs to a shared Log Analytics workspace and can communicate seamlessly using Dapr, if used.
    • Containers refer to the container image deployed in the Azure Container App. They can use any runtime, programming language, or development stack - and be discovered using any public or private container registry. A container app can support multiple containers.
    • Revisions are immutable snapshots of an Azure Container App. The first revision is created when the ACA is first deployed, with new revisions created when redeployment occurs with revision-scope changes. Multiple revisions can run concurrently in an environment.
    • Application Lifecycle Management revolves around these revisions, with a container app having three phases: deployment, update and deactivation.
    • Microservices are independent units of functionality in Cloud-Native architectures. A single container app typically represents a single microservice, and can be composed from one or more containers. Microservices can now be scaled and upgraded independently, giving your application more flexibility and control.
    • Networking architecture consists of a virtual network (VNET) associated with the environment. Unless you provide a custom VNET at environment creation time, a default VNET is automatically created. The VNET configuration determines access (ingress, internal vs. external) and can influence auto-scaling choices (e.g., use the HTTP Edge Proxy and scale based on the number of HTTP requests).
    • Observability is about monitoring the health of your application and diagnosing it to improve reliability or performance. Azure Container Apps has a number of features - from Log streaming and Container console to integration with Azure Monitor - to provide a holistic view of application status over time.
    • Easy Auth is possible with built-in support for authentication and authorization including support for popular identity providers like Facebook, Google, Twitter and GitHub - alongside the Microsoft Identity Platform.

    Keep these terms in mind as we walk through more tutorials this week, to see how they find application in real examples. Finally, a note on Dapr, the Distributed Application Runtime that abstracts away many of the challenges posed by distributed systems - and lets you focus on your application logic.

    DAPR INTEGRATION MADE EASY

    Dapr uses a sidecar architecture, allowing Azure Container Apps to communicate with Dapr Building Block APIs over either gRPC or HTTP. Your ACA can be built to run with or without Dapr - giving you the flexibility to incrementally adopt specific APIs and unlock related capabilities as the need arises.
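
    For a feel of how lightweight that opt-in is, here's a hedged sketch of enabling the Dapr sidecar at deployment time with the Azure CLI (the app id, port, and image are placeholders):

    az containerapp create \
    --name my-dapr-app \
    --resource-group my-container-apps \
    --environment my-environment \
    --image myregistry.azurecr.io/my-dapr-app:1.0 \
    --enable-dapr \
    --dapr-app-id my-dapr-app \
    --dapr-app-port 3000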

    In later articles this week, we'll do a deeper dive into Dapr and build our first Dapr-enabled Azure Container App to get a better understanding of this integration.

    Exercise

    Congratulations! You made it! By now you should have a good idea of what Cloud-Native development means, why Microservices and Containers are important to that vision - and how Azure Container Apps helps simplify the building and deployment of microservices based applications using serverless architectures on Azure.

    Now it's your turn to reinforce learning by doing.

    Resources

    Three key resources to bookmark and explore:


    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings, that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
    • Integrate Functions with CosmosDB and SignalR to bring real-time push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

    Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn which walks you through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days) but this time, with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

    Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - that explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!


    · 7 min read
    Jay Miller

    Welcome to Day 7 of #30DaysOfServerless!

    Over the past couple of days, we've explored Azure Functions from the perspective of specific programming languages. Today we'll continue that trend by looking at Python - exploring the Timer Trigger and CosmosDB binding, and showcasing integration with a FastAPI-implemented web app.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance: Azure Functions On Python
    • Build & Deploy: Wildfire Detection Apps with Timer Trigger + CosmosDB
    • Demo: My Fire Map App: Using FastAPI and Azure Maps to visualize data
    • Next Steps: Explore Azure Samples
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Developer Guidance

    If you're a Python developer new to serverless on Azure, start with the Azure Functions Python Developer Guide. It covers:

    • Quickstarts with Visual Studio Code and Azure CLI
    • Adopting best practices for hosting, reliability and efficiency.
    • Tutorials showcasing Azure automation, image classification and more
    • Samples showcasing Azure Functions features for Python developers

    Now let's dive in and build our first Python-based Azure Functions app.


    Detecting Wildfires Around the World?

    I live in California which is known for lots of wildfires. I wanted to create a proof of concept for developing an application that could let me know if there was a wildfire detected near my home.

    NASA has a few satellites orbiting the Earth that can detect wildfires. These satellites take scans of radiative heat and use that to determine the likelihood of a wildfire. NASA updates their information about every 30 minutes, and it can take about four hours to scan and process the information.

    Fire Point Near Austin, TX

    I want to get the information but I don't want to ping NASA or another service every time I check.

    What if I occasionally download all the data I need? Then I can ping that as much as I like.

    I can create a script that does just that. Any time I say "I can create a script", that is a verbal cue for me to consider using an Azure Function. With the function running in the cloud, I can ensure the script runs even when I'm not at my computer.

    How the Timer Trigger Works

    This function will utilize the Timer Trigger. This means Azure will call this function to run at a scheduled interval. This isn't the only way to keep the data in sync, but we know that ArcGIS, the service that we're using, only updates its data every 30 minutes or so.

    To learn more about the TimerTrigger as a concept, check out the Azure Functions documentation around Timers.

    When we create the function, we tell it a few things like where the script will live (in our case in __init__.py), the type and direction, and notably how often it should run. We specify the timer using "schedule": <THE CRON EXPRESSION>. For us, we're using 0 0,30 * * * * which means every 30 minutes, at the hour and half-hour.

    {
      "scriptFile": "__init__.py",
      "bindings": [
        {
          "name": "reqTimer",
          "type": "timerTrigger",
          "direction": "in",
          "schedule": "0 0,30 * * * *"
        }
      ]
    }

    Next, we create the code that runs when the function is called.

    Connecting to the Database and our Source

    Disclaimer: The data that we're pulling is for educational purposes only. This is not meant to be a production level application. You're welcome to play with this project, but ensure that you're using the data in compliance with Esri.

    Our function does two important things.

    1. It pulls data from ArcGIS that meets the parameters
    2. It stores that pulled data into our database

    If you want to check out the code in its entirety, check out the GitHub repository.

    Pulling the data from ArcGIS is easy. We can use the ArcGIS Python API. Then, we need to load the service layer. Finally we query that layer for the specific data.

    from arcgis.features import FeatureSet

    def write_new_file_data(gis_id: str, layer: int = 0) -> FeatureSet:
        """Query the fire layer and return the matching features as a FeatureSet."""
        fire_data = g.content.get(gis_id)  # g is an authenticated arcgis.gis.GIS connection created elsewhere
        feature = fire_data.layers[layer]  # Loading Featured Layer from ArcGIS
        q = feature.query(
            where="confidence >= 65 AND hours_old <= 4",  # The filter for the query
            return_distinct_values=True,
            out_fields="confidence, hours_old",  # The data we want to store with our points
            out_sr=4326,  # The spatial reference of the data
        )
        return q

    Then we need to store the data in our database.

    We're using Cosmos DB for this. Cosmos DB is a NoSQL database, which means that the data looks a lot like a Python dictionary, since it's stored as JSON. This means that we don't need to worry about converting the data into a format that can be stored in a relational database.

    The second reason is that Cosmos DB is tied into the Azure ecosystem, so if we want to trigger other Azure Functions from events around it, we can.

    Our script grabs the information that we pulled from ArcGIS and stores it in our database.

    from azure.cosmos.aio import CosmosClient

    async with CosmosClient.from_connection_string(COSMOS_CONNECTION_STRING) as client:
        database = client.get_database_client(DATABASE)  # DATABASE and CONTAINER are names from our configuration
        container = database.get_container_client(container=CONTAINER)
        for record in data:
            await container.create_item(
                record,
                enable_automatic_id_generation=True,
            )

    In our code, each of these functions lives in its own space. So in the main function we focus solely on what Azure Functions will be doing. The script that gets called is __init__.py. There we'll have the function call the other functions.

    We created another function called load_and_write that does all the work outlined above. __init__.py will call that.

    import azure.functions as func

    async def main(reqTimer: func.TimerRequest) -> None:
        # database and container are the Cosmos DB clients created from configuration elsewhere in the project
        await update_db.load_and_write(gis_id=GIS_LAYER_ID, database=database, container=container)

    Then we deploy the function to Azure. I like to use VS Code's Azure extension, but you can also deploy it a few other ways.
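
    For example, if you have the Azure Functions Core Tools installed, publishing from the terminal is one command (the Function App name is a placeholder and must already exist in your subscription):

    func azure functionapp publish my-fire-data-func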

    Deploying the function via VS Code

    Once the function is deployed, we can load the Azure portal and see a ping whenever the function is called. The pings correspond to the Function being run.

    We can also see the data now living in the datastore.

    Document in Cosmos DB

    It's in the Database, Now What?

    Now the real fun begins. We just loaded the last bit of fire data into a database. We can now query that data and serve it to others.

    As I mentioned before, our Cosmos DB data is also stored in Azure, which means that we can deploy Azure Functions to trigger when new data is added. Perhaps you can use this to check for fires near you and use a Logic App to send an alert to your phone or email.

    Another option is to create a web application that talks to the database and displays the data. I've created an example of this using FastAPI – https://jm-func-us-fire-notify.azurewebsites.net.

    Website that Checks for Fires


    Next Steps

    This article showcased the Timer Trigger and the HTTP Trigger for Azure Functions in Python. Now try exploring other triggers and bindings by browsing the Bindings code samples for Python and the Azure Functions samples for Python.

    Once you've tried out the samples, you may want to explore more advanced integrations or extensions for serverless Python scenarios. Here are some suggestions:

    And check out the resources for more tutorials to build up your Azure Functions skills.

    Exercise

    I encourage you to fork the repository and try building and deploying it yourself! You can see the TimerTrigger and an HTTPTrigger building the website.

    Then try extending it. Perhaps if wildfires are a big thing in your area, you can use some of the data available in Planetary Computer to check out some other datasets.

    Resources


    · 10 min read
    Mike James
    Matt Soucoup

    Welcome to Day 6 of #30DaysOfServerless!

    The theme for this week is Azure Functions. Today we're going to talk about why Azure Functions are a great fit for .NET developers.


    What We'll Cover

    • What is serverless computing?
    • How does Azure Functions fit in?
    • Let's build a simple Azure Function in .NET
    • Developer Guide, Samples & Scenarios
    • Exercise: Explore the Create Serverless Applications path.
    • Resources: For self-study!

    A banner image that has the title of this article with the author&#39;s photo and a drawing that summarizes the demo application.


    The leaves are changing colors and there's a chill in the air, or for those lucky folks in the Southern Hemisphere, the leaves are budding and a warmth is in the air. Either way, that can only mean one thing - it's Serverless September!🍂 So today, we're going to take a look at Azure Functions - what they are, and why they're a great fit for .NET developers.

    What is serverless computing?

    For developers, serverless computing means you write highly compact individual functions that do one thing - and run in the cloud. These functions are triggered by some external event. That event could be a record being inserted into a database, a file uploaded into BLOB storage, a timer interval elapsed, or even a simple HTTP request.

    But... servers are still definitely involved! What has changed from other types of cloud computing is that the idea and ownership of the server has been abstracted away.

    A lot of the time you'll hear folks refer to this as Functions as a Service or FaaS. The defining characteristic is all you need to do is put together your application logic. Your code is going to be invoked in response to events - and the cloud provider takes care of everything else. You literally get to focus on only the business logic you need to run in response to something of interest - no worries about hosting.

    You do not need to worry about wiring up the plumbing between the service that originates the event and the serverless runtime environment. The cloud provider will handle the mechanism to call your function in response to whatever event you chose to have the function react to. And it passes along any data that is relevant to the event to your code.

    And here's a really neat thing. You only pay for the time the serverless function is running. So, if you have a function that is triggered by an HTTP request, and you rarely get requests to your function, you would rarely pay.

    How does Azure Functions fit in?

    Microsoft's Azure Functions is a modern serverless architecture, offering event-driven cloud computing that is easy for developers to use. It provides a way to run small pieces of code or Functions in the cloud without developers having to worry themselves about the infrastructure or platform the Function is running on.

    That means we're only concerned about writing the logic of the Function. And we can write that logic in our choice of languages... like C#. We are also able to add packages from NuGet to Azure Functions—this way, we don't have to reinvent the wheel and can use well-tested libraries.

    And the Azure Functions runtime takes care of a ton of neat stuff for us, like passing in information about the event that caused it to kick off - in a strongly typed variable. It also "binds" to other services, like Azure Storage, so we can easily access those services from our code without having to worry about new'ing them up.

    Let's build an Azure Function!

    Scaffold the Function

    Don't worry about having an Azure subscription or even being connected to the internet—we can develop and debug Azure Functions locally using either Visual Studio or Visual Studio Code!

    For this example, I'm going to use Visual Studio Code to build up a Function that responds to an HTTP trigger and then writes a message to an Azure Storage Queue.

    Diagram of the how the Azure Function will use the HTTP trigger and the Azure Storage Queue Binding

    The incoming HTTP call is the trigger and the message queue the Function writes to is an output binding. Let's have at it!

    info

    You do need to have some tools downloaded and installed to get started. First and foremost, you'll need Visual Studio Code. Then you'll need the Azure Functions extension for VS Code to do the development with. Finally, you'll need the Azurite Emulator installed as well—this will allow us to write to a message queue locally.

    Oh! And of course, .NET 6!

    Now with all of the tooling out of the way, let's write a Function!

    1. Fire up Visual Studio Code. Then, from the command palette, type: Azure Functions: Create New Project

      Screenshot of create a new function dialog in VS Code

    2. Follow the steps as to which directory you want to create the project in and which .NET runtime and language you want to use.

      Screenshot of VS Code prompting which directory and language to use

    3. Pick .NET 6 and C#.

      It will then prompt you to pick the folder in which your Function app resides and then select a template.

      Screenshot of VS Code prompting you to pick the Function trigger template

      Pick the HTTP trigger template. When prompted for a name, call it: PostToAQueue.

    Execute the Function Locally

    1. After giving it a namespace, it prompts for an authorization level—pick Anonymous. Now we have a Function! Let's go ahead and hit F5 and see it run!
    info

    After the templates have finished installing, you may get a prompt to download additional components—these are NuGet packages. Go ahead and do that.

    When it runs, you'll see the Azure Functions logo appear in the Terminal window with the URL the Function is located at. Copy that link.

    Screenshot of the Azure Functions local runtime starting up

    1. Type the link into a browser, adding a name parameter as shown in this example: http://localhost:7071/api/PostToAQueue?name=Matt. The Function will respond with a message. You can even set breakpoints in Visual Studio Code and step through the code!

    Write To Azure Storage Queue

    Next, we'll get this HTTP trigger Function to write to a local Azure Storage Queue. First we need to add the Storage NuGet package to our project. In the terminal, type:

    dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage

    Then set a configuration setting to tell the Function runtime where to find the Storage. Open up local.settings.json and set "AzureWebJobsStorage" to "UseDevelopmentStorage=true". The full file will look like:

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "AzureWebJobsDashboard": ""
      }
    }

    Then create a new class within your project. This class will hold nothing but properties. Call it whatever you want and add whatever properties you want to it. I called mine TheMessage and added Id and Name properties to it.

    public class TheMessage
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    Finally, change your PostToAQueue Function, so it looks like the following:


    public static class PostToAQueue
    {
        [FunctionName("PostToAQueue")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
            [Queue("demoqueue", Connection = "AzureWebJobsStorage")] IAsyncCollector<TheMessage> messages,
            ILogger log)
        {
            string name = req.Query["name"];

            await messages.AddAsync(new TheMessage { Id = System.Guid.NewGuid().ToString(), Name = name });

            return new OkResult();
        }
    }

    Note the addition of the messages variable. This is telling the Function to use the storage connection we specified before via the Connection property. And it is also specifying which queue to use in that storage account, in this case demoqueue.

    All the code is doing is pulling out the name from the query string, new'ing up a new TheMessage class and adding that to the IAsyncCollector variable.

    That will add the new message to the queue!

    Make sure Azurite is started within VS Code (both the queue and blob emulators). Run the app and send the same GET request as before: http://localhost:7071/api/PostToAQueue?name=Matt.
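
    If you'd rather stay in the terminal, the same request can be sent with curl (7071 is the default port used by the local Functions host):

    curl "http://localhost:7071/api/PostToAQueue?name=Matt"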

    If you have the Azure Storage Explorer installed, you can browse your local Queue and see the new message in there!

    Screenshot of Azure Storage Explorer with the new message in the queue

    Summing Up

    We had a quick look at what Microsoft's serverless offering, Azure Functions, is comprised of. It's a full-featured FaaS offering that enables you to write functions in your language of choice, including reusing packages such as those from NuGet.

    A highlight of Azure Functions is the way they are triggered and bound. The triggers define how a Function starts, and bindings are akin to input and output parameters on it that correspond to external services. The best part is that the Azure Function runtime takes care of maintaining the connection to the external services so you don't have to worry about new'ing up or disposing of the connections yourself.

    We then wrote a quick Function that gets triggered off an HTTP request and then writes a query string parameter from that request into a local Azure Storage Queue.

    What's Next

    So, where can you go from here?

    Think about how you can build real-world scenarios by integrating other Azure services. For example, you could use serverless integrations to build a workflow where the input payload, received using an HTTP Trigger, is stored in Blob Storage (output binding), which in turn triggers another service (e.g., Cognitive Services) that processes the blob and returns an enhanced result.

    Keep an eye out for an update to this post where we walk through a scenario like this with code. Check out the resources below to help you get started on your own.

    Exercise

    This brings us close to the end of Week 1 with Azure Functions. We've learned core concepts, built and deployed our first Functions app, and explored quickstarts and scenarios for different programming languages. So, what can you do to explore this topic on your own?

    • Explore the Create Serverless Applications learning path which has several modules that explore Azure Functions integrations with various services.
    • Take up the Cloud Skills Challenge and complete those modules in a fun setting where you compete with peers for a spot on the leaderboard!

    Then come back tomorrow as we wrap up the week with a discussion on end-to-end scenarios, a recap of what we covered this week, and a look at what's ahead next week.

    Resources

    Start here for developer guidance in getting started with Azure Functions as a .NET/C# developer:

    Then learn about supported Triggers and Bindings for C#, with code snippets to show how they are used.

    Finally, explore Azure Functions samples for C# and learn to implement serverless solutions. Examples include:


    · 8 min read
    David Justo

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Durable Entities
    • Some Background
    • A Programming Model
    • Entities for a Micro-Blogging Platform


    Durable Entities are a special type of Azure Function that allow you to implement stateful objects in a serverless environment. They make it easy to introduce stateful components to your app without needing to manually persist data to external storage, so you can focus on your business logic. We’ll demonstrate their power with a real-life example in the last section.

    Entities 101: Some Background

    Programming Durable Entities feels a lot like object-oriented programming, except that these “objects” exist in a distributed system. Like objects, each Entity instance has a unique identifier, i.e. an entity ID that can be used to read and manipulate its internal state. Entities define a list of operations that constrain how their internal state is managed, like an object interface.

    Some experienced readers may realize that Entities sound a lot like an implementation of the Actor Pattern. For a discussion of the relationship between Entities and Actors, please refer to this documentation.

    Entities are a part of the Durable Functions Extension, an extension of Azure Functions that empowers programmers with stateful abstractions for serverless, such as Orchestrations (i.e. workflows).

    Durable Functions is available in most Azure Functions runtime environments: .NET, Node.js, Python, PowerShell, and Java (preview). For this article, we’ll focus on the C# experience, but note that Entities are also available in Node.js and Python; their availability in other languages is underway.

    Entities 102: The programming model

    Imagine you want to implement a simple Entity that just counts things. Its interface allows you to get the current count, add to the current count, and to reset the count to zero.

    If you implement this in an object-oriented way, you’d probably define a class (say “Counter”) with a method to get the current count (say “Counter.Get”), another to add to the count (say “Counter.Add”), and another to reset the count (say “Counter.Reset”). Well, the implementation of an Entity in C# is not that different from this sketch:

    [JsonObject(MemberSerialization.OptIn)]
    public class Counter
    {
        [JsonProperty("value")]
        public int Value { get; set; }

        public void Add(int amount)
        {
            this.Value += amount;
        }

        public Task Reset()
        {
            this.Value = 0;
            return Task.CompletedTask;
        }

        public Task<int> Get()
        {
            return Task.FromResult(this.Value);
        }

        [FunctionName(nameof(Counter))]
        public static Task Run([EntityTrigger] IDurableEntityContext ctx)
            => ctx.DispatchAsync<Counter>();
    }

    We’ve defined a class named Counter, with an internal count stored in the variable “Value” which is manipulated through the “Add” and “Reset” methods, and which can be read via “Get”.

    The “Run” method is simply boilerplate required for the Azure Functions framework to interact with the object we’ve defined – it’s the method that the framework calls internally when it needs to load the Entity object. When DispatchAsync is called, the Entity and its corresponding state (the last count in “Value”) are loaded from storage. Again, this is mostly just boilerplate: your Entity’s business logic lies in the rest of the class.

    Finally, the Json annotation on top of the class and the Value field tells the Durable Functions framework that the “Value” field is to be durably persisted as part of the durable state on each Entity invocation. If you were to annotate other class variables with JsonProperty, they would also become part of the managed state.

    Entities for a micro-blogging platform

We’ll try to implement a simple micro-blogging platform, a la Twitter. Let’s call it “Chirper”. In Chirper, users write chirps (i.e., tweets), they can follow and unfollow other users, and they can read the chirps of users they follow.

Defining Entities

Just like in OOP, it’s useful to begin by identifying the stateful agents of this scenario. In this case, users have state (who they follow and their chirps), and chirps have state in the form of their content. So we could model these stateful agents as Entities!

    Below is a potential way to implement a User for Chirper as an Entity:

[JsonObject(MemberSerialization = MemberSerialization.OptIn)]
public class User : IUser
{
    [JsonProperty]
    public List<string> FollowedUsers { get; set; } = new List<string>();

    public void Add(string user)
    {
        FollowedUsers.Add(user);
    }

    public void Remove(string user)
    {
        FollowedUsers.Remove(user);
    }

    public Task<List<string>> Get()
    {
        return Task.FromResult(FollowedUsers);
    }

    // note: removed boilerplate “Run” method, for conciseness.
}

In this case, our Entity’s internal state is stored in “FollowedUsers”, a list of the accounts followed by this user. The operations exposed by this Entity allow clients to read and modify this data: it can be read via “Get”, a newly followed account can be added via “Add”, and an account can be unfollowed via “Remove”.

With that, we’ve modeled Chirper’s users as Entities! Recall that each Entity instance has a unique ID, so we can consider that unique ID to correspond to a specific user account.

What about chirps? Should we represent them as Entities as well? That would certainly be valid. However, we would then need to maintain a mapping between each user’s entity ID and the entity IDs of every chirp that user wrote.

For demonstration purposes, a simpler approach is to create an Entity that stores the list of all chirps authored by a given user; call it UserChirps. Then, we can have each User Entity share the same entity ID as its corresponding UserChirps Entity, making client operations easier.

    Below is a simple implementation of UserChirps:

[JsonObject(MemberSerialization = MemberSerialization.OptIn)]
public class UserChirps : IUserChirps
{
    [JsonProperty]
    public List<Chirp> Chirps { get; set; } = new List<Chirp>();

    public void Add(Chirp chirp)
    {
        Chirps.Add(chirp);
    }

    public void Remove(DateTime timestamp)
    {
        Chirps.RemoveAll(chirp => chirp.Timestamp == timestamp);
    }

    public Task<List<Chirp>> Get()
    {
        return Task.FromResult(Chirps);
    }

    // Omitted boilerplate “Run” function
}

Here, our state is stored in Chirps, a list of the user’s posts; Chirp itself is just a simple data type carrying a UserId, a Timestamp, and the post’s Content, as you’ll see in the HTTP endpoint below. Our operations follow the same pattern as before (Get, Add, and Remove), just over different data.

    To put it all together, let’s set up Entity clients to generate and manipulate these Entities according to some REST API.

Interacting with Entities

    Before going there, let’s talk briefly about how you can interact with an Entity. Entity interactions take one of two forms -- calls and signals:

• Calling an entity is a two-way communication. You send an operation message to the entity and then wait for the response message before you continue. The response can be a result value or an error.
• Signaling an entity is a one-way (fire-and-forget) communication. You send an operation message but don’t wait for a response. You have the reassurance that the message will be delivered eventually, but you don’t know when and don’t know what the response is.

For example, when you read the state of an Entity, you are performing a “call” interaction. When you record that a user has followed another, you may choose to simply signal it.

Now say a user with a given userId (say “durableFan99”) wants to post a chirp. For this, you can write an HTTP endpoint that signals the UserChirps entity to record that chirp. We can leverage the HTTP trigger functionality from Azure Functions and pair it with an entity client binding that signals the Add operation of our UserChirps Entity:

[FunctionName("UserChirpsPost")]
public static async Task<HttpResponseMessage> UserChirpsPost(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "user/{userId}/chirps")]
    HttpRequestMessage req,
    [DurableClient] IDurableClient client,
    ILogger log,
    string userId)
{
    Authenticate(req, userId);
    var chirp = new Chirp()
    {
        UserId = userId,
        Timestamp = DateTime.UtcNow,
        Content = await req.Content.ReadAsStringAsync(),
    };
    await client.SignalEntityAsync<IUserChirps>(userId, x => x.Add(chirp));
    return req.CreateResponse(HttpStatusCode.Accepted, chirp);
}

Following the same pattern as above, to get all the chirps from a user, you could read the state of your Entity via ReadEntityStateAsync, which follows the call-interaction pattern since your client expects a response:

[FunctionName("UserChirpsGet")]
public static async Task<HttpResponseMessage> UserChirpsGet(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "user/{userId}/chirps")] HttpRequestMessage req,
    [DurableClient] IDurableClient client,
    ILogger log,
    string userId)
{
    Authenticate(req, userId);
    var target = new EntityId(nameof(UserChirps), userId);
    var chirps = await client.ReadEntityStateAsync<UserChirps>(target);
    return chirps.EntityExists
        ? req.CreateResponse(HttpStatusCode.OK, chirps.EntityState.Chirps)
        : req.CreateResponse(HttpStatusCode.NotFound);
}

    And there you have it! To play with a complete implementation of Chirper, you can try out our sample in the Durable Functions extension repo.

    Thank you!

    info

Thanks for following along, and we hope you find Entities as useful as we do! If you have questions or feedback, please file issues in the repo above or tag us @AzureFunctions on Twitter.

    - + \ No newline at end of file diff --git a/blog/page/27/index.html b/blog/page/27/index.html index 34fb49ec3a..4d50d065a9 100644 --- a/blog/page/27/index.html +++ b/blog/page/27/index.html @@ -14,13 +14,13 @@ - +

    · 8 min read
    Kendall Roden

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Defining Cloud-Native
    • Introduction to Azure Container Apps
    • Dapr In Azure Container Apps
    • Conclusion


    Defining Cloud-Native

    While I’m positive I’m not the first person to ask this, I think it’s an appropriate way for us to kick off this article: “How many developers does it take to define Cloud-Native?” I hope you aren’t waiting for a punch line because I seriously want to know your thoughts (drop your perspectives in the comments..) but if you ask me, the limit does not exist!

    A quick online search of the topic returns a laundry list of articles, e-books, twitter threads, etc. all trying to nail down the one true definition. While diving into the rabbit hole of Cloud-Native, you will inevitably find yourself on the Cloud-Native Computing Foundation (CNCF) site. The CNCF is part of the Linux Foundation and aims to make "Cloud-Native computing ubiquitous" through deep open source project and community involvement. The CNCF has also published arguably the most popularized definition of Cloud-Native which begins with the following statement:

    “Cloud-Native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds."

    Over the past four years, my day-to-day work has been driven primarily by the surging demand for application containerization and the drastic adoption of Kubernetes as the de-facto container orchestrator. Customers are eager to learn and leverage patterns, practices and technologies that enable building "loosely coupled systems that are resilient, manageable, and observable". Enterprise developers at these organizations are being tasked with rapidly deploying event-driven, horizontally-scalable, polyglot services via repeatable, code-to-cloud pipelines.

    While building Cloud-Native solutions can enable rapid innovation, the transition to adopting a Cloud-Native architectural approach comes with a steep learning curve and a new set of considerations. In a document published by Microsoft called What is Cloud-Native?, there are a few key areas highlighted to aid customers in the adoption of best practices for building modern, portable applications which I will summarize below:

    Cloud infrastructure

    • Cloud-Native applications leverage cloud infrastructure and make use of Platform-as-a-service offerings
    • Cloud-Native applications depend on highly-elastic infrastructure with automatic scaling, self-healing, and monitoring capabilities

    Modern application design

    • Cloud-Native applications should be constructed using principles outlined in the 12 factor methodology

    Microservices

    • Cloud-Native applications are typically composed of microservices where each core function, or service, is built and deployed independently

    Containers

    • Cloud-Native applications are typically deployed using containers as a packaging mechanism where an application's code and dependencies are bundled together for consistency of deployment
    • Cloud-Native applications leverage container orchestration technologies- primarily Kubernetes- for achieving capabilities such as workload scheduling, self-healing, auto-scale, etc.

    Backing services

    • Cloud-Native applications are ideally stateless workloads which retrieve and store data in data stores external to the application hosting infrastructure. Cloud providers like Azure provide an array of backing data services which can be securely accessed from application code and provide capabilities for ensuring application data is highly-available

    Automation

    • Cloud-Native solutions should use deployment automation for backing cloud infrastructure via versioned, parameterized Infrastructure as Code (IaC) templates which provide a consistent, repeatable process for provisioning cloud resources.
    • Cloud-Native solutions should make use of modern CI/CD practices and pipelines to ensure successful, reliable infrastructure and application deployment.

    Azure Container Apps

    In many of the conversations I've had with customers that involve talk of Kubernetes and containers, the topics of cost-optimization, security, networking, and reducing infrastructure and operations inevitably arise. I personally have yet to meet with any customers eager to have their developers get more involved with infrastructure concerns.

    One of my former colleagues, Jeff Hollan, made a statement while appearing on a 2019 episode of The Cloud-Native Show where he shared his perspective on Cloud-Native:

    "When I think about Cloud-Native... it's writing applications in a way where you are specifically thinking about the benefits the cloud can provide... to me, serverless is the perfect realization of that because the only reason you can write serverless applications is because the cloud exists."

    I must say that I agree with Jeff's perspective. In addition to optimizing development practices for the Cloud-Native world, reducing infrastructure exposure and operations is equally as important to many organizations and can be achieved as a result of cloud platform innovation.

    In May of 2022, Microsoft announced the general availability of Azure Container Apps. Azure Container Apps provides customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform.

For those interested in taking advantage of the open source ecosystem while reaping the benefits of a managed platform experience, Azure Container Apps runs on Kubernetes and provides a set of managed open source projects embedded directly into the platform, including the Kubernetes Event Driven Autoscaler (KEDA), the Distributed Application Runtime (Dapr) and Envoy.

    Azure Kubernetes Service vs. Azure Container Apps

Container Apps provides other Cloud-Native features and capabilities in addition to those above, including but not limited to:

The ability to dynamically scale and support growing numbers of users, events, and requests is one of the core requirements for most Cloud-Native, distributed applications. Azure Container Apps is purpose-built with this and other Cloud-Native tenets in mind.

    What can you build with Azure Container Apps?

    Dapr in Azure Container Apps

As a quick personal note before we dive into this section, I will say I am a bit biased about Dapr. When Dapr was first released, I had an opportunity to get involved immediately and became an early advocate for the project. It is created by developers for developers, and it solves tangible problems that customers architecting distributed systems face:

    HOW DO I
    • integrate with external systems that my app has to react and respond to?
    • create event driven apps which reliably send events from one service to another?
    • observe the calls and events between my services to diagnose issues in production?
    • access secrets securely from within my application?
    • discover other services and call methods on them?
    • prevent committing to a technology early and have the flexibility to swap out an alternative based on project or environment changes?

    While existing solutions were in the market which could be used to address some of the concerns above, there was not a lightweight, CNCF-backed project which could provide a unified approach to solve the more fundamental ask from customers: "How do I make it easy for developers to build microservices based on Cloud-Native best practices?"

    Enter Dapr!

"The Distributed Application Runtime (Dapr) provides APIs that simplify microservice connectivity. Whether your communication pattern is service to service invocation or pub/sub messaging, Dapr helps you write resilient and secured microservices. By letting Dapr’s sidecar take care of the complex challenges such as service discovery, message broker integration, encryption, observability, and secret management, you can focus on business logic and keep your code simple."
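To make that concrete, here is a minimal sketch (not part of the original post) of calling another microservice through the local Dapr sidecar from Node.js 18+. The app id "orders" and the method name are hypothetical placeholders; the /v1.0/invoke route is Dapr's service invocation HTTP API, and DAPR_HTTP_PORT is the environment variable the Dapr sidecar exposes to your container.

// Minimal sketch: invoke another service through the local Dapr sidecar (Node.js 18+).
// "orders" is a hypothetical Dapr app id; Dapr handles discovery, retries, and mTLS.
const daprPort = process.env.DAPR_HTTP_PORT || 3500;

async function getOrder(orderId) {
  // Dapr service invocation API: /v1.0/invoke/<app-id>/method/<method-name>
  const response = await fetch(
    `http://localhost:${daprPort}/v1.0/invoke/orders/method/orders/${orderId}`
  );
  if (!response.ok) {
    throw new Error(`Dapr invocation failed with status ${response.status}`);
  }
  return response.json();
}

getOrder("42").then(console.log).catch(console.error);

Because the call goes to the sidecar rather than directly to the other service, the application code stays the same regardless of where that service runs or how it is secured.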

The Container Apps platform provides a managed and supported Dapr integration which eliminates the need for deploying and managing the Dapr OSS project yourself. In addition to providing managed upgrades, the platform also exposes a simplified Dapr interaction model to increase developer productivity and reduce the friction required to leverage Dapr capabilities. While the Dapr integration makes it easier for customers to adopt Cloud-Native best practices in Container Apps, it is not required in order to use the Container Apps platform.

    Image on Dapr

For additional insight into the Dapr integration, visit aka.ms/aca-dapr.

    Conclusion

    Backed by and integrated with powerful Cloud-Native technologies, Azure Container Apps strives to make developers productive, while reducing the operational overhead and learning curve that typically accompanies adopting a cloud-native strategy.

    If you are interested in building resilient, portable and highly-scalable apps visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/page/28/index.html b/blog/page/28/index.html index 3a7a21832e..3d6381622d 100644 --- a/blog/page/28/index.html +++ b/blog/page/28/index.html @@ -14,13 +14,13 @@ - +

    · 7 min read
    Aaron Powell

    Welcome to Day 5 of #30DaysOfServerless!

    Yesterday we looked at Azure Functions from the perspective of a Java developer. Today, we'll do a similar walkthrough from the perspective of a JavaScript developer.

    And, we'll use this to explore another popular usage scenario for Azure Functions: building a serverless HTTP API using JavaScript.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance
    • Create Azure Function with CLI
    • Calling an external API
    • Azure Samples & Scenarios for JS
    • Exercise: Support searching
    • Resources: For self-study!


    Developer Guidance

    If you're a JavaScript developer new to serverless on Azure, start by exploring the Azure Functions JavaScript Developers Guide. It covers:

• Quickstarts for Node.js - using Visual Studio Code, CLI or Azure Portal
    • Guidance on hosting options and performance considerations
• Azure Functions bindings (and code samples) for JavaScript
    • Scenario examples - integrations with other Azure Services

    Node.js 18 Support

    Node.js 18 Support (Public Preview)

Azure Functions support for Node.js 18 entered Public Preview on Aug 31, 2022 and is supported by the Azure Functions v4.x runtime!

    As we continue to explore how we can use Azure Functions, today we're going to look at using JavaScript to create one, and we're going to be using the newly released Node.js 18 support for Azure Functions to make the most out of the platform.

Ensure you have Node.js 18 and the Azure Functions Core Tools v4.x installed, along with a text editor (I'll use VS Code in this post) and a terminal; then we're ready to go.

    Scenario: Calling The GitHub API

    The application we're going to be building today will use the GitHub API to return a random commit message, so that we don't need to come up with one ourselves! After all, naming things can be really hard! 🤣

    Creating the Azure Function

    To create our Azure Function, we're going to use the Azure Functions CLI, which we can install using npm:

npm install --global azure-functions-core-tools

Once that's installed, we can use the new func command to initialise our project:

    func init --worker-runtime node --language javascript

When running func init we can either provide the worker-runtime and language as arguments, or use the menu system that the tool will provide us. For brevity's sake, I've used the arguments here, specifying that we want node as the runtime and javascript as the language, but you could change that to typescript if you'd prefer to use TypeScript.

    Once the init command is completed, you should have a .vscode folder, and the files .gitignore, host.json, local.settings.json, and package.json.

Files generated by func init

    Adding a HTTP Trigger

    We have an empty Functions app so far, what we need to do next is create a Function that it will run, and we're going to make a HTTP Trigger Function, which is a Function that responds to HTTP requests. We'll use the func new command to create that:

    func new --template "HTTP Trigger" --name "get-commit-message"

When this completes, we'll have a folder for the Function, using the name we provided, that contains the files function.json and index.js. Let's open function.json to understand it a little bit:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

This file is used to tell Functions about the Function that we've created and what it does, so it knows to handle the appropriate events. We have a bindings node which contains the event bindings for our Azure Function. The first binding uses the type httpTrigger, which indicates that it'll be executed, or triggered, by an HTTP event, and the methods indicate that it's listening to both GET and POST (you can change this to whichever HTTP methods you want to support). The HTTP request information will be bound to a property in the Functions context called req, so we can access query strings, the request body, etc.

The other binding has the direction of out, meaning that it's something the Function will return to the caller. Since this is an HTTP API, the type is http, indicating that we'll return an HTTP response, and that response will be on a property called res that we add to the Functions context.

    Let's go ahead and start the Function and call it:

    func start

Starting the Function

    With the Function started, access the endpoint http://localhost:7071/api/get-commit-message via a browser or using cURL:

    curl http://localhost:7071/api/get-commit-message\?name\=ServerlessSeptember

Hello from Azure Functions

    🎉 CONGRATULATIONS

    You created and ran a JavaScript function app locally!

    Calling an external API

    It's time to update the Function to do what we want to do - call the GitHub Search API and get some commit messages. The endpoint that we'll be calling is https://api.github.com/search/commits?q=language:javascript.

    Note: The GitHub API is rate limited and this sample will call it unauthenticated, so be aware of that in your own testing.

    To call this API, we'll leverage the newly released fetch support in Node 18 and async/await, to make for a very clean Function.

Open up the index.js file, and delete the contents of the existing Function, so we have an empty one:

    module.exports = async function (context, req) {

    }

    The default template uses CommonJS, but you can use ES Modules with Azure Functions if you prefer.
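For reference, the ES Modules flavour of that empty function would look something like the sketch below (an assumption on my part: either your package.json sets "type": "module", or the file is named index.mjs).

// index.mjs - ES Modules flavour of the same empty function (sketch)
export default async function (context, req) {
  // function body goes here
}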

    Now we'll use fetch to call the API, and unpack the JSON response:

module.exports = async function (context, req) {
  const res = await fetch("https://api.github.com/search/commits?q=language:javascript");
  const json = await res.json();
  const messages = json.items.map(item => item.commit.message);
  context.res = {
    body: {
      messages
    }
  };
}

    To send a response to the client, we're setting the context.res property, where res is the name of the output binding in our function.json, and giving it a body that contains the commit messages.

    Run func start again, and call the endpoint:

    curl http://localhost:7071/api/get-commit-message

Then you'll get some commit messages:

A series of commit messages from the GitHub Search API

    🎉 CONGRATULATIONS

There we go, we've created an Azure Function which acts as a proxy to another API, which we call (using native fetch in Node.js 18) and from which we return a subset of the JSON payload.

    Next Steps

    Other Triggers, Bindings

    This article focused on using the HTTPTrigger and relevant bindings, to build a serverless API using Azure Functions. How can you explore other supported bindings, with code samples to illustrate usage?

    Scenarios with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other services. Here are some suggestions:

    Exercise: Support searching

    The GitHub Search API allows you to provide search parameters via the q query string. In this sample, we hard-coded it to be language:javascript, but as a follow-on exercise, expand the Function to allow the caller to provide the search terms as a query string to the Azure Function, which is passed to the GitHub Search API. Hint - have a look at the req argument.
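If you want to check your work, here's one possible sketch (not the only solution): read a q value from the query string and fall back to the original hard-coded search when it's missing.

module.exports = async function (context, req) {
  // Use the caller-supplied search term if present, otherwise keep the original query.
  const query = req.query.q || "language:javascript";
  const res = await fetch(
    "https://api.github.com/search/commits?q=" + encodeURIComponent(query)
  );
  const json = await res.json();
  context.res = {
    body: { messages: json.items.map((item) => item.commit.message) }
  };
};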

    Resources

    - + \ No newline at end of file diff --git a/blog/page/29/index.html b/blog/page/29/index.html index 169284e353..79f32787b7 100644 --- a/blog/page/29/index.html +++ b/blog/page/29/index.html @@ -14,13 +14,13 @@ - +

    · 8 min read
    Rory Preddy

    Welcome to Day 4 of #30DaysOfServerless!

    Yesterday we walked through an Azure Functions Quickstart with JavaScript, and used it to understand the general Functions App structure, tooling and developer experience.

    Today we'll look at developing Functions app with a different programming language - namely, Java - and explore developer guidance, tools and resources to build serverless Java solutions on Azure.


    What We'll Cover


    Developer Guidance

    If you're a Java developer new to serverless on Azure, start by exploring the Azure Functions Java Developer Guide. It covers:

    In this blog post, we'll dive into one quickstart, and discuss other resources briefly, for awareness! Do check out the recommended exercises and resources for self-study!


    My First Java Functions App

    In today's post, we'll walk through the Quickstart: Azure Functions tutorial using Visual Studio Code. In the process, we'll setup our development environment with the relevant command-line tools and VS Code extensions to make building Functions app simpler.

Note: Completing this exercise may incur a cost of a few USD cents based on your Azure subscription. Explore pricing details to learn more.

    First, make sure you have your development environment setup and configured.

    PRE-REQUISITES
    1. An Azure account with an active subscription - Create an account for free
    2. The Java Development Kit, version 11 or 8. - Install
    3. Apache Maven, version 3.0 or above. - Install
    4. Visual Studio Code. - Install
    5. The Java extension pack - Install
    6. The Azure Functions extension for Visual Studio Code - Install

    VS Code Setup

    NEW TO VISUAL STUDIO CODE?

    Start with the Java in Visual Studio Code tutorial to jumpstart your learning!

    Install the Extension Pack for Java (shown below) to install 6 popular extensions to help development workflow from creation to testing, debugging, and deployment.

    Extension Pack for Java

    Now, it's time to get started on our first Java-based Functions app.

    1. Create App

    1. Open a command-line terminal and create a folder for your project. Use the code command to launch Visual Studio Code from that directory as shown:

      $ mkdir java-function-resource-group-api
      $ cd java-function-resource-group-api
      $ code .
2. Open the Visual Studio Code Command Palette (Ctrl + Shift + P) and select Azure Functions: Create new project to kickstart the create workflow. Alternatively, you can click the Azure icon (on the activity sidebar) to get the Workspace window, then click "+" and pick the "Create Function" option as shown below.

      Screenshot of creating function in Azure from Visual Studio Code.

    3. This triggers a multi-step workflow. Fill in the information for each step as shown in the following prompts. Important: Start this process from an empty folder - the workflow will populate it with the scaffold for your Java-based Functions app.

  • Choose the directory location: You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
  • Select a language: Choose Java.
  • Select a version of Java: Choose Java 11 or Java 8, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally.
  • Provide a group ID: Choose com.function.
  • Provide an artifact ID: Enter myFunction.
  • Provide a version: Choose 1.0-SNAPSHOT.
  • Provide a package name: Choose com.function.
  • Provide an app name: Enter HttpExample.
  • Select the build tool for Java project: Choose Maven.

    Visual Studio Code uses the provided information and generates an Azure Functions project. You can view the local project files in the Explorer - it should look like this:

    Azure Functions Scaffold For Java

    2. Preview App

    Visual Studio Code integrates with the Azure Functions Core tools to let you run this project on your local development computer before you publish to Azure.

    1. To build and run the application, use the following Maven command. You should see output similar to that shown below.

      $ mvn clean package azure-functions:run
      ..
      ..
      Now listening on: http://0.0.0.0:7071
      Application started. Press Ctrl+C to shut down.

      Http Functions:

      HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
      ...
    2. Copy the URL of your HttpExample function from this output to a browser and append the query string ?name=<YOUR_NAME>, making the full URL something like http://localhost:7071/api/HttpExample?name=Functions. The browser should display a message that echoes back your query string value. The terminal in which you started your project also shows log output as you make requests.

    🎉 CONGRATULATIONS

    You created and ran a function app locally!

    With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger. After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code and Maven to publish and test the project on Azure.

    3. Sign into Azure

    Before you can deploy, sign in to your Azure subscription.

    az login

    The az login command signs you into your Azure account.

    Use the following command to deploy your project to a new function app.

    mvn clean package azure-functions:deploy

    When the creation is complete, the following Azure resources are created in your subscription:

    • Resource group. Named as java-functions-group.
    • Storage account. Required by Functions. The name is generated randomly based on Storage account name requirements.
• Hosting plan. Serverless hosting for your function app. The name is java-functions-app-service-plan.
    • Function app. A function app is the deployment and execution unit for your functions. The name is randomly generated based on your artifactId, appended with a randomly generated number.

    4. Deploy App

1. Back in the Resources area in the sidebar, expand your subscription, your new function app, and Functions. Right-click (Windows) or Ctrl + click (macOS) the HttpExample function and choose Execute Function Now....

      Screenshot of executing function in Azure from Visual Studio Code.

    2. In Enter request body you see the request message body value of { "name": "Azure" }. Press Enter to send this request message to your function.

    3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.

    You can also copy the complete Invoke URL shown in the output of the publish command into a browser address bar, appending the query parameter ?name=Functions. The browser should display similar output as when you ran the function locally.

    🎉 CONGRATULATIONS

    You deployed your function app to Azure, and invoked it!

    5. Clean up

    Use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name java-functions-group

    Next Steps

    So, where can you go from here? The example above used a familiar HTTP Trigger scenario with a single Azure service (Azure Functions). Now, think about how you can build richer workflows by using other triggers and integrating with other Azure or third-party services.

    Other Triggers, Bindings

    Check out Azure Functions Samples In Java for samples (and short use cases) that highlight other triggers - with code! This includes triggers to integrate with CosmosDB, Blob Storage, Event Grid, Event Hub, Kafka and more.

    Scenario with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other Services. Here are a couple of useful tutorials:

    Exercise

    Time to put this into action and validate your development workflow:

    Resources

    - + \ No newline at end of file diff --git a/blog/page/3/index.html b/blog/page/3/index.html index d5833837fd..23723ad7e3 100644 --- a/blog/page/3/index.html +++ b/blog/page/3/index.html @@ -14,7 +14,7 @@ - + @@ -26,7 +26,7 @@

    ...and that's it! We've successfully deployed our application on Azure!

    But there's more!

    Best practices: Monitoring and CI/CD!

    In my opinion, it's not enough to just set up the application on Azure! I want to know that my web app is performant and serving my users reliably! I also want to make sure that I'm not inadvertently breaking my application as I continue to make changes to it. Thankfully, the Azure Developer CLI also handles all of this via two additional commands - azd monitor and azd pipeline config.

    Application Monitoring

    When we provisioned all of our infrastructure, we also set up application monitoring via a Bicep file in our .infra/ directory that spec'd out an Application Insights dashboard. By running azd monitor we can see the dashboard with live metrics that was configured for the application.

    We can also navigate to the Application Dashboard by clicking on the resource group name, where you can set a specific refresh rate for the dashboard, and see usage, reliability, and performance metrics over time.

    I don't know about everyone else but I have spent a ton of time building out similar dashboards. It can be super time-consuming to write all the queries and create the visualizations so this feels like a real time saver.

    CI/CD

    Finally let's talk about setting up CI/CD! This might be my favorite azd feature. As I mentioned before, the Azure Developer CLI has a command, azd pipeline config, which uses the files in the .github/ directory to set up a GitHub Action. More than that, if there is no upstream repo, the Developer CLI will actually help you create one. But what does this mean exactly? Because our GitHub Action is using the same commands you'd run in the CLI under the hood, we're actually going to have CI/CD set up to run on every commit into the repo, against real Azure resources. What a sweet collaboration feature!

    That's it! We've gone end-to-end with the Azure Developer CLI - initialized a project, provisioned the resources on Azure, deployed our code on Azure, set up monitoring logs and dashboards, and set up a CI/CD pipeline with GitHub Actions to run on every commit into the repo (on real Azure resources!).

    Exercise: Try it yourself or create your own template!

    As an exercise, try out the workflow above with any template on GitHub!

    Or, try turning your own project into an Azure Developer CLI-enabled template by following this guidance. If you create your own template, don't forget to tag the repo with the azd-templates topic on GitHub to help others find it (unfamiliar with GitHub topics? Learn how to add topics to your repo)! We'd also love to chat with you about your experience creating an azd template - if you're open to providing feedback around this, please fill out this form!

    Resources

    - + \ No newline at end of file diff --git a/blog/page/30/index.html b/blog/page/30/index.html index f478adfdd2..ecbda564f3 100644 --- a/blog/page/30/index.html +++ b/blog/page/30/index.html @@ -14,13 +14,13 @@ - +

    · 9 min read
    Nitya Narasimhan

    Welcome to Day 3 of #30DaysOfServerless!

    Yesterday we learned core concepts and terminology for Azure Functions, the signature Functions-as-a-Service option on Azure. Today we take our first steps into building and deploying an Azure Functions app, and validate local development setup.

    Ready? Let's go.


    What We'll Cover


    Developer Guidance

    Before we jump into development, let's familiarize ourselves with language-specific guidance from the Azure Functions Developer Guide. We'll review the JavaScript version but guides for F#, Java, Python, C# and PowerShell are also available.

1. A function is defined by two things: code (written in a supported programming language) and configuration (specified in a function.json file, declaring the triggers, bindings and other context for execution).

    2. A function app is the unit of deployment for your functions, and is associated with a single execution context or runtime. It can contain multiple functions, but they must be in the same language.

    3. A host configuration is runtime-specific configuration that affects all functions running in a given function app instance. It is defined in a host.json file.

    4. A recommended folder structure is defined for the function app, but may vary based on the programming language used. Check the documentation on folder structures to learn the default for your preferred language.

    Here's an example of the JavaScript folder structure for a function app containing two functions with some shared dependencies. Note that host.json (runtime configuration) is defined once, in the root directory. And function.json is defined separately for each function.

    FunctionsProject
    | - MyFirstFunction
    | | - index.js
    | | - function.json
    | - MySecondFunction
    | | - index.js
    | | - function.json
    | - SharedCode
    | | - myFirstHelperFunction.js
    | | - mySecondHelperFunction.js
    | - node_modules
    | - host.json
    | - package.json
    | - local.settings.json

    We'll dive into what the contents of these files look like, when we build and deploy the first function. We'll cover local.settings.json in the About Local Testing section at the end.


    My First Function App

    The documentation provides quickstart options for all supported languages. We'll walk through the JavaScript versions in this article. You have two options for development:

    I'm a huge fan of VS Code - so I'll be working through that tutorial today.

    PRE-REQUISITES

    Don't forget to validate your setup by checking the versions of installed software.

    Install VSCode Extension

    Installing the Visual Studio Code extension should automatically open this page in your IDE with similar quickstart instructions, but potentially more recent screenshots.

    Visual Studio Code Extension for VS Code

    Note that it may make sense to install the Azure tools for Visual Studio Code extensions pack if you plan on working through the many projects in Serverless September. This includes the Azure Functions extension by default.

    Create First Function App

Walk through the Create local [project] steps of the quickstart. The process is quick and painless and scaffolds out this folder structure and files. Note the existence (and locations) of the function.json and host.json files.

    Final screenshot for VS Code workflow

    Explore the Code

Check out the function.json configuration file. It shows that the function is activated by an httpTrigger with an input binding (tied to the req payload) and an output binding (tied to the res payload). And it supports both GET and POST requests on the exposed URL.

{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

    Check out index.js - the function implementation. We see it logs a message to the console when invoked. It then extracts a name value from the input payload (req) and crafts a different responseMessage based on the presence/absence of a valid name. It returns this response in the output payload (res).

module.exports = async function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.');

  const name = (req.query.name || (req.body && req.body.name));
  const responseMessage = name
    ? "Hello, " + name + ". This HTTP triggered function executed successfully."
    : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";

  context.res = {
    // status: 200, /* Defaults to 200 */
    body: responseMessage
  };
}

    Preview Function App Locally

    You can now run this function app locally using Azure Functions Core Tools. VS Code integrates seamlessly with this CLI-based tool, making it possible for you to exploit all its capabilities without leaving the IDE. In fact, the workflow will even prompt you to install those tools if they didn't already exist in your local dev environment.

    Now run the function app locally by clicking on the "Run and Debug" icon in the activity bar (highlighted, left) and pressing the "▶️" (Attach to Node Functions) to start execution. On success, your console output should show something like this.

    Final screenshot for VS Code workflow

    You can test the function locally by visiting the Function Url shown (http://localhost:7071/api/HttpTrigger1) or by opening the Workspace region of the Azure extension, and selecting the Execute Function now menu item as shown.

    Final screenshot for VS Code workflow

    In the latter case, the Enter request body popup will show a pre-populated request of {"name":"Azure"} that you can submit.

    Final screenshot for VS Code workflow

    On successful execution, your VS Code window will show a notification as follows. Take note of the console output - it shows the message encoded in index.js.

    Final screenshot for VS Code workflow

    You can also visit the deployed function URL directly in a local browser - testing the case for a request made with no name payload attached. Note how the response in the browser now shows the non-personalized version of the message!

    Final screenshot for VS Code workflow

    🎉 Congratulations

    You created and ran a function app locally!

    (Re)Deploy to Azure

    Now, just follow the creating a function app in Azure steps to deploy it to Azure, using an active subscription! The deployed app resource should now show up under the Function App Resources where you can click Execute Function Now to test the Azure-deployed version instead. You can also look up the function URL in the portal and visit that link in your local browser to trigger the function without the name context.

    🎉 Congratulations

    You have an Azure-hosted serverless function app!

    Challenge yourself and try to change the code and redeploy to Azure to return something different. You have effectively created a serverless API endpoint!
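If you want a concrete idea, here's one possible tweak (a sketch, not the only approach): return a small JSON payload instead of plain text, then redeploy and hit the same URL to see the new response shape.

// index.js - a small variation that returns JSON instead of plain text (sketch)
module.exports = async function (context, req) {
  // Fall back to a default name when none is supplied.
  const name = req.query.name || (req.body && req.body.name) || "serverless fan";
  context.res = {
    // status defaults to 200
    headers: { "Content-Type": "application/json" },
    body: { greeting: "Hello, " + name + "!", invokedAt: new Date().toISOString() }
  };
};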


    About Core Tools

    That was a lot to cover! In the next few days we'll have more examples for Azure Functions app development - focused on different programming languages. So let's wrap today's post by reviewing two helpful resources.

First, let's talk about Azure Functions Core Tools - the command-line tool that lets you develop, manage, and deploy Azure Functions projects from your local development environment. It is used transparently by the VS Code extension - but you can use it directly from a terminal for a powerful command-line end-to-end developer experience! The Core Tools commands are organized into the following contexts:

    Learn how to work with Azure Functions Core Tools. Not only can it help with quick command execution, it can also be invaluable for debugging issues that may not always be visible or understandable in an IDE.

    About Local Testing

    You might have noticed that the scaffold also produced a local.settings.json file. What is that and why is it useful? By definition, the local.settings.json file "stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally."
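For reference, a typical local.settings.json for a JavaScript Functions project looks something like the sketch below; your scaffold may generate slightly different values, and the storage setting shown here points at the local storage emulator. It typically isn't checked into source control or deployed to Azure.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node"
  }
}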

    Read the guidance on Code and test Azure Functions Locally to learn more about how to configure development environments locally, for your preferred programming language, to support testing and debugging on the local Functions runtime.

    Exercise

    We made it! Now it's your turn!! Here are a few things you can try to apply what you learned and reinforce your understanding:

    Resources

    Bookmark and visit the #30DaysOfServerless Collection. It's the one-stop collection of resources we will keep updated with links to relevant documentation and learning resources.

    - + \ No newline at end of file diff --git a/blog/page/31/index.html b/blog/page/31/index.html index fd96ec5948..2ea4c73d09 100644 --- a/blog/page/31/index.html +++ b/blog/page/31/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    · 9 min read
    Nitya Narasimhan

    Welcome to Day 2️⃣ of #30DaysOfServerless!

Today, we kickstart our journey into serverless on Azure with a look at Functions as a Service. We'll explore Azure Functions - from core concepts to usage patterns.

    Ready? Let's Go!


    What We'll Cover

    • What is Functions-as-a-Service? (FaaS)
    • What is Azure Functions?
    • Triggers, Bindings and Custom Handlers
    • What is Durable Functions?
    • Orchestrators, Entity Functions and Application Patterns
    • Exercise: Take the Cloud Skills Challenge!
    • Resources: #30DaysOfServerless Collection.


    1. What is FaaS?

FaaS stands for Functions as a Service. But what does that mean for us as application developers? We know that building and deploying modern applications at scale can get complicated, and it starts with us needing to make decisions about compute. In other words, we need to answer this question: "where should I host my application, given my resource dependencies and scaling requirements?"


    Azure has this useful flowchart (shown below) to guide your decision-making. You'll see that hosting options generally fall into three categories:

    • Infrastructure as a Service (IaaS) - where you provision and manage Virtual Machines yourself (cloud provider manages infra).
    • Platform as a Service (PaaS) - where you use a provider-managed hosting environment like Azure Container Apps.
    • Functions as a Service (FaaS) - where you forget about hosting environments and simply deploy your code for the provider to run.

    Here, "serverless" compute refers to hosting options where we (as developers) can focus on building apps without having to manage the infrastructure. See serverless compute options on Azure for more information.


    2. Azure Functions

    Azure Functions is the Functions-as-a-Service (FaaS) option on Azure. It is the ideal serverless solution if your application is event-driven with short-lived workloads. With Azure Functions, we develop applications as modular blocks of code (functions) that are executed on demand, in response to configured events (triggers). This approach brings us two advantages:

    • It saves us money. We only pay for the time the function runs.
    • It scales with demand. We have 3 hosting plans for flexible scaling behaviors.

    Azure Functions can be programmed in many popular languages (C#, F#, Java, JavaScript, TypeScript, PowerShell or Python), with Azure providing language-specific handlers and default runtimes to execute them.

    Concept: Custom Handlers
    • What if we wanted to program in a non-supported language?
    • Or we wanted to use a different runtime for a supported language?

    Custom Handlers have you covered! These are lightweight webservers that can receive and process input events from the Functions host - and return responses that can be delivered to any output targets. By this definition, custom handlers can be implemented by any language that supports receiving HTTP events. Check out the quickstart for writing a custom handler in Rust or Go.

    Custom Handlers

    Concept: Trigger and Bindings

    We talked about what functions are (code blocks). But when are they invoked or executed? And how do we provide inputs (arguments) and retrieve outputs (results) from this execution?

    This is where triggers and bindings come in.

    • Triggers define how a function is invoked and what associated data it will provide. A function must have exactly one trigger.
• Bindings declaratively define how a resource is connected to the function. The resource or binding can be of type input, output, or both. Bindings are optional. A Function can have multiple input and output bindings.

    Azure Functions comes with a number of supported bindings that can be used to integrate relevant services to power a specific scenario. For instance:

• HTTP Triggers - invoke the function in response to an HTTP request. Use these to implement serverless APIs for your application.
• Event Grid Triggers - invoke the function on receiving events from an Event Grid. Use these to process events reactively, and potentially publish responses back to custom Event Grid topics.
• SignalR Service Triggers - invoke the function in response to messages from Azure SignalR, allowing your application to take actions with real-time contexts.

Triggers and bindings help you abstract your function's interfaces to the other components it interacts with, eliminating hardcoded integrations. They are configured differently based on the programming language you use. For example - JavaScript functions are configured in the function.json file. Here's an example of what that looks like.

{
  "disabled": false,
  "bindings": [
    // ... bindings here
    {
      "type": "bindingType",
      "direction": "in",
      "name": "myParamName",
      // ... more depending on binding
    }
  ]
}

    The key thing to remember is that triggers and bindings have a direction property - triggers are always in, input bindings are in and output bindings are out. Some bindings can support a special inout direction.

    The documentation has code examples for bindings to popular Azure services. Here's an example of the bindings and trigger configuration for a BlobStorage use case.

    // function.json configuration

{
  "bindings": [
    {
      "queueName": "myqueue-items",
      "connection": "MyStorageConnectionAppSetting",
      "name": "myQueueItem",
      "type": "queueTrigger",
      "direction": "in"
    },
    {
      "name": "myInputBlob",
      "type": "blob",
      "path": "samples-workitems/{queueTrigger}",
      "connection": "MyStorageConnectionAppSetting",
      "direction": "in"
    },
    {
      "name": "myOutputBlob",
      "type": "blob",
      "path": "samples-workitems/{queueTrigger}-Copy",
      "connection": "MyStorageConnectionAppSetting",
      "direction": "out"
    }
  ],
  "disabled": false
}

    The code below shows the function implementation. In this scenario, the function is triggered by a queue message carrying an input payload with a blob name. In response, it copies that data to the resource associated with the output binding.

    // function implementation

module.exports = async function (context) {
  context.log('Node.js Queue trigger function processed', context.bindings.myQueueItem);
  context.bindings.myOutputBlob = context.bindings.myInputBlob;
};
    Concept: Custom Bindings

    What if we have a more complex scenario that requires bindings for non-supported resources?

There is an option to create custom bindings if necessary. We don't have time to dive into the details here, but definitely check out the documentation.


    3. Durable Functions

This sounds great, right? But now, let's talk about one challenge for Azure Functions. In the use cases so far, the functions are stateless - they take inputs at runtime if necessary, and return output results if required. But they are otherwise self-contained, which is great for scalability!

But what if I needed to build more complex workflows that need to store and transfer state, and complete operations in a reliable manner? Durable Functions is an extension of Azure Functions that makes stateful workflows possible.

    Concept: Orchestrator Functions

    How can I create workflows that coordinate functions?

    Durable Functions use orchestrator functions to coordinate execution of other Durable functions within a given Functions app. These functions are durable and reliable. Later in this post, we'll talk briefly about some application patterns that showcase popular orchestration scenarios.

    Concept: Entity Functions

    How do I persist and manage state across workflows?

Entity Functions provide explicit state management for Durable Functions, defining operations to read and write state on durable entities. They are associated with a special entity trigger for invocation. These are currently available only for a subset of programming languages, so check to see if they are supported for your programming language of choice.
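For instance, in JavaScript the durable-functions package exposes entities via df.entity. The sketch below shows a minimal counter entity; it assumes the Durable Functions extension is installed and that the function's function.json declares an entityTrigger binding.

const df = require("durable-functions");

// A minimal counter entity (sketch): read the current state, apply the requested
// operation, and persist the new state via the entity context.
module.exports = df.entity(function (context) {
  const current = context.df.getState(() => 0); // default state is 0
  switch (context.df.operationName) {
    case "add":
      context.df.setState(current + context.df.getInput());
      break;
    case "reset":
      context.df.setState(0);
      break;
    case "get":
      context.df.return(current);
      break;
  }
});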

    USAGE: Application Patterns

Durable Functions are a fascinating topic that would require a separate, longer post to do justice. For now, let's look at some application patterns that showcase their value, starting with the simplest one, Function Chaining, shown below:

    Function Chaining

    Here, we want to execute a sequence of named functions in a specific order. As shown in the snippet below, the orchestrator function coordinates invocations on the given functions in the desired sequence - "chaining" inputs and outputs to establish the workflow. Take note of the yield keyword. This triggers a checkpoint, preserving the current state of the function for reliable operation.

const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
  try {
    const x = yield context.df.callActivity("F1");
    const y = yield context.df.callActivity("F2", x);
    const z = yield context.df.callActivity("F3", y);
    return yield context.df.callActivity("F4", z);
  } catch (error) {
    // Error handling or compensation goes here.
  }
});

    Other application patterns for durable functions include:

    There's a lot more to explore but we won't have time to do that today. Definitely check the documentation and take a minute to read the comparison with Azure Logic Apps to understand what each technology provides for serverless workflow automation.


    4. Exercise

    That was a lot of information to absorb! Thankfully, there are a lot of examples in the documentation that can help put these in context. Here are a couple of exercises you can do, to reinforce your understanding of these concepts.


    5. What's Next?

    The goal for today was to give you a quick tour of key terminology and concepts related to Azure Functions. Tomorrow, we dive into the developer experience, starting with core tools for local development and ending by deploying our first Functions app.

    Want to do some prep work? Here are a few useful links:


    6. Resources


    - + \ No newline at end of file diff --git a/blog/page/32/index.html b/blog/page/32/index.html index 8955c5fb87..a5a1b14b8f 100644 --- a/blog/page/32/index.html +++ b/blog/page/32/index.html @@ -14,13 +14,13 @@ - +

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    What We'll Cover

    • What is Serverless September? (6 initiatives)
    • How can I participate? (3 actions)
• How can I skill up? (30 days)
    • Who is behind this? (Team Contributors)
    • How can you contribute? (Custom Issues)
    • Exercise: Take the Cloud Skills Challenge!
    • Resources: #30DaysOfServerless Collection.

    Serverless September

    Welcome to Day 01 of 🍂 #ServerlessSeptember! Today, we kick off a full month of content and activities to skill you up on all things Serverless on Azure with content, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfServerless in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

    What are 3 things you can do today, to jumpstart your learning journey?

    Serverless Hacks


    #30DaysOfServerless

    #30DaysOfServerless is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Serverless On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON FUNCTIONS ⚡️

    Here's a sneak peek at what we have planned for week 1. We'll start with a broad look at fundamentals, walkthrough examples for each targeted programming language, then wrap with a post that showcases the role of Azure Functions in powering different serverless scenarios.

    • Sep 02: Learn Core Concepts for Azure Functions
    • Sep 03: Build and deploy your first Function
    • Sep 04: Azure Functions - for Java Developers!
    • Sep 05: Azure Functions - for JavaScript Developers!
    • Sep 06: Azure Functions - for .NET Developers!
    • Sep 07: Azure Functions - for Python Developers!
    • Sep 08: Wrap: Azure Functions + Serverless on Azure

Ways to Participate

    We hope you are as excited as we are, to jumpstart this journey. We want to make this a useful, beginner-friendly journey and we need your help!

    Here are the many ways you can participate:

    • Follow Azure on dev.to - we'll republish posts under this series page and welcome comments and feedback there!
    • Discussions on GitHub - Use this if you have feedback for us (on how we can improve these resources), or want to chat with your peers about serverless topics.
    • Custom Issues - just pick a template, create a new issue by filling in the requested details, and submit. You can use these to:
      • submit questions for AskTheExpert (live Q&A) ahead of time
  • submit your own articles or projects for the community to learn from
      • share your ServerlessHack and get listed in our Hall Of Fame!
      • report bugs or share ideas for improvements

    Here's the list of custom issues currently defined.

    Community Buzz

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Azure Functions post tomorrow!


    - + \ No newline at end of file diff --git a/blog/page/33/index.html b/blog/page/33/index.html index 07e4574190..d219b511d5 100644 --- a/blog/page/33/index.html +++ b/blog/page/33/index.html @@ -14,13 +14,13 @@ - +

    · 3 min read
    Sara Gibbons

    ✨ Serverless September For Students

My love for the tech industry grows as it evolves: not just because of the new technologies to play with, but because paths into a tech career keep expanding, bringing so many new voices, ideas, and perspectives to our industry. Serverless computing, in particular, removes barriers to entry for so many.

It's one reason I enjoy working with universities and students: I get to hear the excitement of learning and the fresh ideas and perspectives from our student community. All of you students are incredible! The way you view serverless, and what you imagine it can do, is so cool!

    This year for Serverless September we want to hear all the amazing ways our student community is learning and working with Azure Serverless, and have all new ways for you to participate.

    Getting Started

    If you don't already have an Azure for Students account you can easily get your FREE account created at Azure for Students Sign up.

    If you are new to serverless, here are a couple links to get you started:

    No Experience, No problem

    For Serverless September we have planned beginner friendly content all month long. Covering such services as:

You can follow #30DaysOfServerless here on the blog for daily posts covering concepts, scenarios, and how to create end-to-end solutions.

    Join the Cloud Skills Challenge where we have selected a list of Learn Modules for you to go through at your own pace, including deploying a full stack application with Azure Static Web Apps.

    Have A Question

We want to hear it! All month long we will have Ask The Expert sessions. Submit your questions at any time and we'll be sure to get one of our Azure Serverless experts to get you an answer.

    Share What You've Created

If you have written a blog post, recorded a video, or have an open source Azure Serverless project, we'd love to see it! Here are some links for you to share your creations:

    🧭 Explore Student Resources

    ⚡️ Join us!

    Multiple teams across Microsoft are working to create Serverless September! They all want to hear from our incredible student community. We can't wait to share all the Serverless September resources and hear what you have learned and created. Here are some ways to keep up to date on all Serverless September activity:

    - + \ No newline at end of file diff --git a/blog/page/34/index.html b/blog/page/34/index.html index 49bc18ef2a..63306f54fd 100644 --- a/blog/page/34/index.html +++ b/blog/page/34/index.html @@ -14,13 +14,13 @@ - +

    · 3 min read
    Nitya Narasimhan
    Devanshi Joshi

    🍂 It's September?

    Well, almost! September 1 is a few days away and I'm excited! Why? Because it's the perfect time to revisit #Serverless September, a month of

    ".. content-driven learning where experts and practitioners share their insights and tutorials on how to use serverless technologies effectively in today's ecosystems"

    If the words look familiar, it's because I actually wrote them 2 years ago when we launched the 2020 edition of this series. You might even recall this whimsical image I drew to capture the concept of September (fall) and Serverless (event-driven on-demand compute). Since then, a lot has happened in the serverless ecosystem!

    You can still browse the 2020 Content Collection to find great talks, articles and code samples to get started using Serverless on Azure. But read on to learn what's new!

    🧐 What's New?

Well - quite a few things actually. This year, Devanshi Joshi and I expanded the original concept in a number of ways. Here are just a few that come to mind.

    New Website

    This year, we created this website (shortcut: https://aka.ms/serverless-september) to serve as a permanent home for content in 2022 and beyond - making it a canonical source for the #serverless posts we publish to tech communities like dev.to, Azure Developer Community and Apps On Azure. We hope this also makes it easier for you to search for, or discover, current and past articles that support your learning journey!

    Start by bookmarking these two sites:

    More Options

    Previous years focused on curating and sharing content authored by Microsoft and community contributors, showcasing serverless examples and best practices. This was perfect for those who already had experience with the core devtools and concepts.

    This year, we wanted to combine beginner-friendly options (for those just starting their serverless journey) with more advanced insights (for those looking to skill up further). Here's a sneak peek at some of the initiatives we've got planned!

    We'll also explore the full spectrum of serverless - from Functions-as-a-Service (for granularity) to Containerization (for deployment) and Microservices (for scalability). Here are a few services and technologies you'll get to learn more about:

    ⚡️ Join us!

    This has been a labor of love from multiple teams at Microsoft! We can't wait to share all the resources that we hope will help you skill up on all things Serverless this September! Here are a couple of ways to participate:

    - + \ No newline at end of file diff --git a/blog/page/4/index.html b/blog/page/4/index.html index d06f02987c..8c338d5d4e 100644 --- a/blog/page/4/index.html +++ b/blog/page/4/index.html @@ -14,13 +14,13 @@ - +

    · 14 min read
    Justin Yoo

    Welcome to Day 28 of #30DaysOfServerless!

Since it's serverless end-to-end week, I'm going to discuss how a serverless application built with Azure Functions and the OpenAPI extension can be seamlessly integrated with a Power Platform custom connector through Azure API Management - in a post I call "Where am I? My GPS Location with Serverless Power Platform Custom Connector"

    OK. Are you ready? Let's get started!


    What We'll Cover

    • What is Power Platform custom connector?
    • Proxy app to Google Maps and Naver Map API
    • API Management integration
    • Two ways of building custom connector
    • Where am I? Power Apps app
    • Exercise: Try this yourself!
    • Resources: For self-study!


    SAMPLE REPO

    Want to follow along? Check out the sample app on GitHub repository used in this post.

    What is Power Platform custom connector?

Power Platform is a low-code/no-code application development tool for fusion teams - groups of people from various disciplines, including field experts (domain experts), IT professionals, and professional developers, working together to deliver business value. Within the fusion team, Power Platform turns the domain experts into citizen developers or low-code developers. What makes Power Platform even more powerful is that it offers hundreds of connectors to other Microsoft 365 and third-party services like SAP, ServiceNow, Salesforce, Google, etc.

However, what if you want to use your internal APIs, or APIs that don't yet offer an official connector? Here's an example: suppose your company has an inventory management system and you want to use it within Power Apps or Power Automate. That is exactly where Power Platform custom connectors come in.

    Inventory Management System for Power Apps

Therefore, Power Platform custom connectors enrich citizen developers' capabilities, because they can expose virtually any API application for citizen developers to use.

    In this post, let's build a custom connector that provides a static map image generated by Google Maps API and Naver Map API using your GPS location.

    Proxy app to Google Maps and Naver Map API

First, let's build an Azure Functions app that connects to Google Maps and Naver Map. Suppose that you've already got the API keys for both services. If you haven't yet, get the keys first by visiting here for Google and here for Naver. Then, store them in local.settings.json within your Azure Functions app.

{
  "Values": {
    ...
    "Maps__Google__ApiKey": "<GOOGLE_MAPS_API_KEY>",
    "Maps__Naver__ClientId": "<NAVER_MAP_API_CLIENT_ID>",
    "Maps__Naver__ClientSecret": "<NAVER_MAP_API_CLIENT_SECRET>"
  }
}

    Here's the sample logic to get the static image from Google Maps API. It takes the latitude and longitude of your current location and image zoom level, then returns the static map image. There are a few hard-coded assumptions, though:

    • The image size should be 400x400.
    • The image should be in .png format.
• The marker should be red and show my location.
public class GoogleMapService : IMapService
{
    public async Task<byte[]> GetMapAsync(HttpRequest req)
    {
        var latitude = req.Query["lat"];
        var longitude = req.Query["long"];
        var zoom = (string)req.Query["zoom"] ?? "14";

        var sb = new StringBuilder();
        sb.Append("https://maps.googleapis.com/maps/api/staticmap")
          .Append($"?center={latitude},{longitude}")
          .Append("&size=400x400")
          .Append($"&zoom={zoom}")
          .Append($"&markers=color:red|{latitude},{longitude}")
          .Append("&format=png32")
          .Append($"&key={this._settings.Google.ApiKey}");
        var requestUri = new Uri(sb.ToString());

        var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

        return bytes;
    }
}

    The NaverMapService class has a similar logic with the same input and assumptions. Here's the code:

public class NaverMapService : IMapService
{
    public async Task<byte[]> GetMapAsync(HttpRequest req)
    {
        var latitude = req.Query["lat"];
        var longitude = req.Query["long"];
        var zoom = (string)req.Query["zoom"] ?? "13";

        var sb = new StringBuilder();
        sb.Append("https://naveropenapi.apigw.ntruss.com/map-static/v2/raster")
          .Append($"?center={longitude},{latitude}")
          .Append("&w=400")
          .Append("&h=400")
          .Append($"&level={zoom}")
          .Append($"&markers=color:blue|pos:{longitude}%20{latitude}")
          .Append("&format=png")
          .Append("&lang=en");
        var requestUri = new Uri(sb.ToString());

        this._http.DefaultRequestHeaders.Clear();
        this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY-ID", this._settings.Naver.ClientId);
        this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY", this._settings.Naver.ClientSecret);

        var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

        return bytes;
    }
}
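Both map services reference _http and _settings fields that aren't shown in the snippets above. Here's a minimal sketch of how that wiring might look, assuming an HttpClient plus a MapsSettings options class bound from the Maps__* values in local.settings.json; the class and constructor shapes are assumptions for illustration, not code from the sample repository.

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// The interface implied by the two services above.
public interface IMapService
{
    Task<byte[]> GetMapAsync(HttpRequest req);
}

// Assumed settings classes, bound from the Maps__* values in local.settings.json.
public class MapsSettings
{
    public GoogleSettings Google { get; set; }
    public NaverSettings Naver { get; set; }
}

public class GoogleSettings
{
    public string ApiKey { get; set; }
}

public class NaverSettings
{
    public string ClientId { get; set; }
    public string ClientSecret { get; set; }
}

// Assumed constructor injection for GoogleMapService; NaverMapService would mirror it.
public class GoogleMapService : IMapService
{
    private readonly HttpClient _http;
    private readonly MapsSettings _settings;

    public GoogleMapService(HttpClient http, MapsSettings settings)
    {
        this._http = http;
        this._settings = settings;
    }

    public Task<byte[]> GetMapAsync(HttpRequest req)
    {
        // See the GoogleMapService snippet above for the full implementation.
        throw new System.NotImplementedException();
    }
}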

Let's take a look at the function endpoints for Google Maps and Naver Map. As the GetMapAsync(req) method returns a byte array, you need to return it as a FileContentResult with the content type image/png.

// Google Maps
public class GoogleMapsTrigger
{
    [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
    public async Task<IActionResult> GetGoogleMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
    {
        this._logger.LogInformation("C# HTTP trigger function processed a request.");

        var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

        return new FileContentResult(bytes, "image/png");
    }
}

// Naver Map
public class NaverMapsTrigger
{
    [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
    public async Task<IActionResult> GetNaverMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
    {
        this._logger.LogInformation("C# HTTP trigger function processed a request.");

        var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

        return new FileContentResult(bytes, "image/png");
    }
}

    Then, add the OpenAPI capability to each function endpoint. Here's the example:

// Google Maps
public class GoogleMapsTrigger
{
    [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
    // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
    [OpenApiOperation(operationId: nameof(GoogleMapsTrigger.GetGoogleMapImage), tags: new[] { "google" })]
    [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
    [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
    [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `14`")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
    // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
    public async Task<IActionResult> GetGoogleMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
    {
        ...
    }
}

// Naver Map
public class NaverMapsTrigger
{
    [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
    // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
    [OpenApiOperation(operationId: nameof(NaverMapsTrigger.GetNaverMapImage), tags: new[] { "naver" })]
    [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
    [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
    [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `13`")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
    // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
    public async Task<IActionResult> GetNaverMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
    {
        ...
    }
}

Run the function app locally. Here are the latitude and longitude values for Seoul, Korea.

    • latitude: 37.574703
    • longitude: 126.978519

    Google Map for Seoul

    It seems to be working! Let's deploy it to Azure.

    API Management integration

Visual Studio 2022 provides a built-in tool for deploying Azure Functions apps to Azure. In addition, the deployment tool supports seamless integration with Azure API Management, as long as your Azure Functions app enables the OpenAPI capability. In this post, I'm going to use this feature. Right-click the Azure Functions project and select the "Publish" menu.

    Visual Studio context menu for publish

    Then, you will see the publish screen. Click the "➕ New" button to create a new publish profile.

    Create a new publish profile

    Choose "Azure" and click the "Next" button.

    Choose the target platform for publish

Select the app instance. This time, simply pick the "Azure Function App (Windows)" option, then click "Next".

    Choose the target OS for publish

If you've already provisioned an Azure Function app instance, you will see it on the screen. Otherwise, create a new one. Then, click "Next".

    Choose the target instance for publish

    In the next step, you are asked to choose the Azure API Management instance for integration. Choose one, or create a new one. Then, click "Next".

    Choose the APIM instance for integration

Finally, select the publish method: either local publish or a GitHub Actions workflow. Let's pick the local publish method for now. Then, click "Finish".

    Choose the deployment type

    The publish profile has been created. Click "Close" to move on.

    Publish profile created

    Now the function app is ready for deployment. Click the "Publish" button and see how it goes.

    Publish function app

    The Azure function app has been deployed and integrated with the Azure API Management instance.

    Function app published

    Go to the published function app site, and everything looks OK.

    Function app on Azure

    And API Management shows the function app integrated perfectly.

    Function app integrated with APIM

    Now, you are ready to create a custom connector. Let's move on.

    Two ways of building custom connector

    There are two ways to create a custom connector.

    Export custom connector from API Management

First, you can use the built-in API Management feature directly. Click the ellipsis icon and select the "Create Power Connector" menu.

    Create Power Connector menu

    Then, you are redirected to this screen. While the "API" and "API display name" fields are pre-populated, you need to choose the Power Platform environment tied to your tenant. Choose an environment, click "Authenticate", and click "Create".

    Create custom connector screen

    Check your custom connector on Power Apps or Power Automate side.

    Custom connector created on Power Apps

However, there's a caveat to this approach. Because it's tied to your tenant, you should use the second approach if you want to use this custom connector in another tenant.

    Import custom connector from OpenAPI document or URL

    Click the ellipsis icon again and select the "Export" menu.

    Export menu

    On the Export API screen, choose the "OpenAPI v2 (JSON)" panel because Power Platform custom connector currently accepts version 2 of the OpenAPI document.

    Select OpenAPI v2

    Download the OpenAPI document to your local computer and move to your Power Apps or Power Automate page under your desired environment. I'm going to use the Power Automate page. First, go to the "Data" ➡️ "Custom connectors" page. Then, click the "➕ New custom connector" ➡️ "Import an OpenAPI file" at the top right corner.

    New custom connector

When a modal pops up, give the custom connector a name and import the OpenAPI document you exported above. Then, click "Continue".

    Import custom connector

    Actually, that's it! Next, click the "✔️ Create connector" button to create the connector.

    Create custom connector

    Go back to the custom connector page, and you will see the "Maps API" custom connector you just created.

    Custom connector imported

    So, you are ready to create a Power Apps app to display your location on Google Maps or Naver Map! Let's move on.

    Where am I? Power Apps app

    Open the Power Apps Studio, and create an empty canvas app, named Who am I with a phone layout.

    Custom connector integration

    To use the custom connector created above, you need to add it to the Power App. Click the cylinder icon on the left and click the "Add data" button.

    Add custom connector to data pane

    Search the custom connector name, "Maps API", and click the custom connector to add.

    Search custom connector

    To use the custom connector, you also need to create a connection to it. Click the "Connect" button and move on.

    Create connection to custom connector

    Now, you've got the connection to the custom connector.

    Connection to custom connector ready

    Controls

Let's build the Power Apps app. First of all, put three controls onto the canvas: an Image, a Slider and a Button.

    Power Apps control added

    Click the "Screen1" control and change the value on the property "OnVisible" to the formula below. The formula stores the current slider value in the zoomlevel collection.

ClearCollect(
    zoomlevel,
    Slider1.Value
)

    Click the "Botton1" control and change the value on the property "OnSelected" to the formula below. It passes the current latitude, longitude and zoom level to the custom connector and receives the image data. The received image data is stored in the result collection.

ClearCollect(
    result,
    MAPS.GetGoogleMapImage(
        Location.Latitude,
        Location.Longitude,
        { zoom: First(zoomlevel).Value }
    )
)

    Click the "Image1" control and change the value on the property "Image" to the formula below. It gets the image data from the result collection.

    First(result).Url

    Click the "Slider1" control and change the value on the property "OnChange" to the formula below. It stores the current slider value to the zoomlevel collection, followed by calling the custom connector to get the image data against the current location.

ClearCollect(
    zoomlevel,
    Slider1.Value
);
ClearCollect(
    result,
    MAPS.GetGoogleMapImage(
        Location.Latitude,
        Location.Longitude,
        { zoom: First(zoomlevel).Value }
    )
)

    That seems to be OK. Let's click the "Where am I?" button. But it doesn't show the image. The First(result).Url value is actually similar to this:

    appres://blobmanager/1090a86393a843adbfcf428f0b90e91b/1

It's an internal image reference that you can't access directly.

    Workaround Power Automate workflow

Therefore, you need a workaround that uses a Power Automate workflow to sort out this issue. Open the Power Automate Studio, create an instant cloud flow with the Power Apps trigger, and name it "Where am I". Then add the input parameters lat, long and zoom.

    Power Apps trigger on Power Automate workflow

    Add custom connector action to get the map image.

    Select action to get the Google Maps image

Then, pass the appropriate parameters to the action.

    Pass parameters to the custom connector action

    Add a "Response" action and put the following values into each field.

    • "Body" field:

  {
      "base64Image": <power_automate_expression>
  }

      The <power_automate_expression> should be concat('data:', body('GetGoogleMapImage')?['$content-type'], ';base64,', body('GetGoogleMapImage')?['$content']).

    • "Response Body JSON Schema" field:

  {
      "type": "object",
      "properties": {
          "base64Image": {
              "type": "string"
          }
      }
  }

    Format the Response action

    Let's return to the Power Apps Studio and add the Power Automate workflow you created.

    Add Power Automate workflow

    Select "Button1" and change the value on the property "OnSelect" below. It replaces the direct call to the custom connector with the Power Automate workflow.

ClearCollect(
    result,
    WhereamI.Run(
        Location.Latitude,
        Location.Longitude,
        First(zoomlevel).Value
    )
)

Also, change the value of the "OnChange" property of the "Slider1" control to the formula below, replacing the custom connector call with the Power Automate workflow call.

ClearCollect(
    zoomlevel,
    Slider1.Value
);
ClearCollect(
    result,
    WhereamI.Run(
        Location.Latitude,
        Location.Longitude,
        First(zoomlevel).Value
    )
)

    And finally, change the "Image1" control's "Image" property value below.

    First(result).base64Image

    The workaround has been applied. Click the "Where am I?" button to see your current location from Google Maps.

    Run Power Apps app #1

    If you change the slider left or right, you will see either the zoomed-in image or the zoomed-out image.

    Run Power Apps app #2

    Now, you've created a Power Apps app to show your current location using:

    • Google Maps API through the custom connector, and
    • Custom connector written in Azure Functions with OpenAPI extension!

    Exercise: Try this yourself!

You can fork this GitHub repository to your account and play around with it to see how the custom connector works. After forking the repository, make sure that you create all the necessary secrets in your repository, as documented in the README file.

    Then, click the "Deploy to Azure" button, and it will provision all necessary Azure resources and deploy an Azure Functions app for a custom connector.

    Deploy To Azure

    Once everything is deployed successfully, try to create a Power Apps app and Power Automate workflow to see your current location in real-time!

    Resources: For self-study!

    Want to know more about Power Platform custom connector and Azure Functions OpenAPI extension? Here are several resources you can take a look at:

    - + \ No newline at end of file diff --git a/blog/page/5/index.html b/blog/page/5/index.html index 9e305a73ff..3e6af3c006 100644 --- a/blog/page/5/index.html +++ b/blog/page/5/index.html @@ -14,14 +14,14 @@ - +

    · 5 min read
    Madhura Bharadwaj

    Welcome to Day 26 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Monitoring your Azure Functions
    • Built-in log streaming
    • Live Metrics stream
    • Troubleshooting Azure Functions


    Monitoring your Azure Functions:

    Azure Functions uses Application Insights to collect and analyze log data from individual function executions in your function app.

    Using Application Insights

Application Insights collects log, performance, and error data. Because it automatically detects performance anomalies and includes powerful analytics tools, you can more easily diagnose issues and better understand how your functions are used. These tools are designed to help you continuously improve the performance and usability of your functions. You can even use Application Insights during local function app project development.

    Typically, you create an Application Insights instance when you create your function app. In this case, the instrumentation key required for the integration is already set as an application setting named APPINSIGHTS_INSTRUMENTATIONKEY. With Application Insights integration enabled, telemetry data is sent to your connected Application Insights instance. This data includes logs generated by the Functions host, traces written from your functions code, and performance data. In addition to data from your functions and the Functions host, you can also collect data from the Functions scale controller.
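For instance, anything you write through the standard ILogger APIs in your function code ends up as trace telemetry in the connected Application Insights instance. Here's a minimal sketch of an HTTP-triggered C# function (in-process model) writing such a trace; the function name and message are illustrative, not part of the original post.

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    [FunctionName("HelloFunction")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        // With Application Insights integration enabled, this structured log
        // shows up as a trace in the connected Application Insights instance.
        log.LogInformation("HelloFunction processed a request at {Time}", DateTime.UtcNow);

        return new OkObjectResult("Hello from Azure Functions");
    }
}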

    By default, the data collected from your function app is stored in Application Insights. In the Azure portal, Application Insights provides an extensive set of visualizations of your telemetry data. You can drill into error logs and query events and metrics. To learn more, including basic examples of how to view and query your collected data, see Analyze Azure Functions telemetry in Application Insights.

    Using Log Streaming

    In addition to this, you can have a smoother debugging experience through log streaming. There are two ways to view a stream of log files being generated by your function executions.

    • Built-in log streaming: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during local development and when you use the Test tab in the portal. All log-based information is displayed. For more information, see Stream logs. This streaming method supports only a single instance and can't be used with an app running on Linux in a Consumption plan.
• Live Metrics Stream: when your function app is connected to Application Insights, you can view log data and other metrics in near real-time in the Azure portal using Live Metrics Stream. Use this method when monitoring functions running on multiple instances or on Linux in a Consumption plan. This method uses sampled data. Log streams can be viewed both in the portal and in most local development environments.
    Monitoring Azure Functions

Learn how to configure monitoring for your Azure Functions. See Monitoring Azure Functions data reference for detailed information on the metrics and logs created by Azure Functions.

In addition to this, Azure Functions uses Azure Monitor to monitor the health of your function apps. Azure Functions collects the same kinds of monitoring data as other Azure resources, as described in Azure Monitor data collection.

    Troubleshooting your Azure Functions:

    When you do run into issues with your function app, Azure Functions diagnostics points out what’s wrong. It guides you to the right information to troubleshoot and resolve the issue more easily and quickly.

    Let’s explore how to use Azure Functions diagnostics to diagnose and solve common function app issues.

    1. Navigate to your function app in the Azure portal.
    2. Select Diagnose and solve problems to open Azure Functions diagnostics.
3. Once you’re here, there are multiple ways to retrieve the information you’re looking for. Choose a category that best describes the issue with your function app by using the keywords in the homepage tile. You can also type a keyword that best describes your issue in the search bar. There’s also a section at the bottom of the page that will take you directly to some of the more popular troubleshooting tools. For example, you could type execution to see a list of diagnostic reports related to your function app execution and open them directly from the homepage.

    Monitoring and troubleshooting apps in Azure Functions

4. For example, click on the Function App Down or Reporting Errors link under the Popular troubleshooting tools section. You will find detailed analysis, insights and next steps for the issues that were detected. On the left you’ll see a list of detectors. Click on them to explore more, or if there’s a particular keyword you want to look for, type it into the search bar at the top.

    Monitoring and troubleshooting apps in Azure Functions

    TROUBLESHOOTING TIP

Here are some general troubleshooting tips that you can follow if you find your Function App throwing the "Azure Functions Runtime unreachable" error.

    Also be sure to check out the recommended best practices to ensure your Azure Functions are highly reliable. This article details some best practices for designing and deploying efficient function apps that remain healthy and perform well in a cloud-based environment.

    Bonus tip:

    - + \ No newline at end of file diff --git a/blog/page/6/index.html b/blog/page/6/index.html index 742502a56c..d8192f59fe 100644 --- a/blog/page/6/index.html +++ b/blog/page/6/index.html @@ -14,13 +14,13 @@ - +

    · 7 min read
    Brian Benz

    Welcome to Day 25 of #30DaysOfServerless!

    Azure Container Apps enable application code packaged in containers to run and scale without the overhead of managing cloud infrastructure and container orchestration. In this post I'll show you how to deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps.


    What We'll Cover

    • Introduction to Deploying Java containers in the cloud
    • Step-by-step: Deploying to Azure Container Registry
    • Step-by-step: Deploying and running on Azure Container Apps
    • Resources: For self-study!


    Deploy Java containers to cloud

    We'll deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps. Here are the main steps:

    • Create Azure Container Registry (ACR) on Azure portal
    • Create Azure Container App (ACA) on Azure portal.
    • Deploy code to Azure Container Registry from the Azure CLI.
    • Deploy container from ACR to ACA using the Azure portal.
    PRE-REQUISITES

    Sign in to Azure from the CLI using the az login command, and follow the prompts in your browser to complete the authentication process. Also, ensure you're running the latest version of the CLI by using the az upgrade command.

    1. Get Sample Code

Fork and clone the sample GitHub repo to your local machine: navigate to the repo on GitHub and click Fork in the top-right corner of the page.

The example code that we're using is a very basic containerized Spring Boot example. There are a lot more details to learn about Spring Boot apps in Docker; for a deep dive, check out this Spring Boot Guide.

    2. Run Sample Locally (Optional)

    If you have docker installed locally, you can optionally test the code on your local machine. Navigate to the root directory of the forked repository and run the following commands:

    docker build -t spring-boot-docker-aca .
    docker run -p 8080:8080 spring-boot-docker-aca

Open a browser and go to http://localhost:8080. You should see this message:

    Hello Docker World

That indicates the Spring Boot app is successfully running locally in a Docker container.

Next, let's set up an Azure Container Registry and an Azure Container App, and deploy this container to the cloud!


    3. Step-by-step: Deploy to ACR

    To create a container registry from the portal dashboard, Select Create a resource > Containers > Container Registry.

    Navigate to container registry in portal

    In the Basics tab, enter values for Resource group and Registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. Create a new resource group in the West US location named spring-boot-docker-aca. Select the 'Basic' SKU.

    Keep the default values for the remaining settings. Then select Review + create, then Create. When the Deployment succeeded message appears, select the container registry in the portal.

    Note the registry server name ending with azurecr.io. You will use this in the following steps when you push and pull images with Docker.

    3.1 Log into registry using the Azure CLI

    Before pushing and pulling container images, you must log in to the registry instance. Sign into the Azure CLI on your local machine, then run the az acr login command. For this step, use the registry name, not the server name ending with azurecr.io.

    From the command line, type:

    az acr login --name myregistryname

    The command returns Login Succeeded once completed.

    3.2 Build & deploy with az acr build

Next, we're going to deploy the Docker container we created earlier using the az acr build command. az acr build builds the image from local code and pushes the container image to Azure Container Registry if the build is successful.

Go to your local clone of the spring-boot-docker-aca repo and, from the command line, type:

    az acr build --registry myregistryname --image spring-boot-docker-aca:v1 .

    3.3 List container images

Once the az acr build command is complete, you should be able to view the container as a repository in the registry. In the portal, open your registry and select Repositories, then select the spring-boot-docker-aca repository created by az acr build. You should also see the v1 image under Tags.

    4. Deploy on ACA

    Now that we have an image in the Azure Container Registry, we can deploy it to Azure Container Apps. For the first deployment, we'll pull the container from our ACR as part of the ACA setup.

    4.1 Create a container app

    We'll create the container app at the same place that we created the container registry in the Azure portal. From the portal, select Create a resource > Containers > Container App. In the Basics tab, set these values:

    4.2 Enter project details

• Subscription: Your Azure subscription.
• Resource group: Use the spring-boot-docker-aca resource group.
• Container app name: Enter spring-boot-docker-aca.

    4.3 Create an environment

    1. In the Create Container App environment field, select Create new.

    2. In the Create Container App Environment page on the Basics tab, enter the following values:

  • Environment name: Enter my-environment.
  • Region: Select westus3.
    3. Select OK.

    4. Select the Create button at the bottom of the Create Container App Environment page.

    5. Select the Next: App settings button at the bottom of the page.

    5. App settings tab

    The App settings tab is where you connect to the ACR and pull the repository image:

• Use quickstart image: Uncheck the checkbox.
• Name: Enter spring-boot-docker-aca.
• Image source: Select Azure Container Registry.
• Registry: Select your ACR from the list.
• Image: Select spring-boot-docker-aca from the list.
• Image Tag: Select v1 from the list.

    5.1 Application ingress settings

• Ingress: Select Enabled.
• Ingress visibility: Select External to publicly expose your container app.
• Target port: Enter 8080.

    5.2 Deploy the container app

    1. Select the Review and create button at the bottom of the page.
    2. Select Create.

    Once the deployment is successfully completed, you'll see the message: Your deployment is complete.

    5.3 Verify deployment

    In the portal, go to the Overview of your spring-boot-docker-aca Azure Container App, and click on the Application Url. You should see this message in the browser:

    Hello Docker World

That indicates the Spring Boot app is running in a Docker container in your spring-boot-docker-aca Azure Container App.

    Resources: For self-study!

Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    - + \ No newline at end of file diff --git a/blog/page/7/index.html b/blog/page/7/index.html index 10d117bb21..5aae0aeaf2 100644 --- a/blog/page/7/index.html +++ b/blog/page/7/index.html @@ -14,13 +14,13 @@ - +

    · 19 min read
    Alex Wolf

    Welcome to Day 24 of #30DaysOfServerless!

    We continue exploring E2E scenarios with this tutorial where you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps.

    The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.


    What We'll Cover

    • Deploy ASP.NET Core 6.0 app to Azure Container Apps
    • Automate deployment workflows using GitHub Actions
    • Provision and deploy resources using Azure Bicep
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.

    In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps. The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.

    You will use GitHub Actions in combination with Bicep to deploy the application. These tools provide an approachable and sustainable solution for building CI/CD pipelines and working with Container Apps.

    PRE-REQUISITES

    Architecture

In this tutorial, we'll set up a container apps environment with a separate container for each project in the sample store app. The major components of the sample project include:

    • A Blazor Server front-end web app to display product information
    • A products API to list available products
    • An inventory API to determine how many products are in stock
    • GitHub Actions and Bicep templates to provision Azure resources and then build and deploy the sample app.

    You will explore these templates later in the tutorial.

    Public internet traffic should be proxied to the Blazor app. The back-end APIs should only be reachable via requests from the Blazor app inside the container apps environment. This setup can be achieved using container apps environment ingress configurations during deployment.

    An architecture diagram of the shopping app


    Project Sources

Want to follow along? Fork the sample below. The tutorial can be completed with or without Dapr integration; pick the path you feel most comfortable with. Dapr provides various benefits that make working with microservices easier - you can learn more in the docs. For this tutorial you will need GitHub and the Azure CLI.

    PICK YOUR PATH

    To follow along with this tutorial, fork the relevant sample project below.

    You can run the app locally from Visual Studio:

    • Right click on the Blazor Store project and select Set as Startup Project.
    • Press the start button at the top of Visual Studio to run the app.
    • (Once running) start each API in the background by
    • right-clicking on the project node
    • selecting Debug --> Start without debugging.

    Once the Blazor app is running, you should see something like this:

    An architecture diagram of the shopping app


    Configuring Azure credentials

    In order to deploy the application to Azure through GitHub Actions, you first need to create a service principal. The service principal will allow the GitHub Actions process to authenticate to your Azure subscription to create resources and deploy code. You can learn more about Service Principals in the Azure CLI documentation. For this step you'll need to be logged into the Azure CLI.

    1) If you have not done so already, make sure to fork the sample project to your own GitHub account or organization.

2) Once you have completed this step, create a service principal using the Azure CLI command below:

    ```azurecli
    $subscriptionId=$(az account show --query id --output tsv)
    az ad sp create-for-rbac --sdk-auth --name WebAndApiSample --role Contributor --scopes /subscriptions/$subscriptionId
    ```

3) Copy the JSON output of the CLI command to your clipboard

4) Under the Settings tab of your forked GitHub repo, create a new secret named AzureSPN. The name is important because it must match the Bicep templates included in the project, which we'll review later. Paste the service principal values you copied to your clipboard into the secret and save your changes. This new secret will be used by the GitHub Actions workflow to authenticate to Azure.

A screenshot of adding GitHub secrets.

Deploy using GitHub Actions

    You are now ready to deploy the application to Azure Container Apps using GitHub Actions. The sample application includes a GitHub Actions template that is configured to build and deploy any changes to a branch named deploy. The deploy branch does not exist in your forked repository by default, but you can easily create it through the GitHub user interface.

    1) Switch to the Actions tab along the top navigation of your GitHub repository. If you have not done so already, ensure that workflows are enabled by clicking the button in the center of the page.

    A screenshot showing how to enable GitHub actions

2) Navigate to the main Code tab of your repository and select the main dropdown. Enter deploy into the branch input box, and then select Create branch: deploy from 'main'.

    A screenshot showing how to create the deploy branch

3) On the new deploy branch, navigate down into the .github/workflows folder. You should see a file called deploy.yml, which contains the main GitHub Actions workflow script. Click on the file to view its content. You'll learn more about this file later in the tutorial.

4) Click the pencil icon in the upper right to edit the document.

5) Change the RESOURCE_GROUP_NAME: value to msdocswebappapis or another valid resource group name of your choosing.

6) In the upper right of the screen, select Start commit and then Commit changes to commit your edit. This will persist the change to the file and trigger the GitHub Actions workflow to build and deploy the app.

    A screenshot showing how to commit changes

7) Switch to the Actions tab along the top navigation again. You should see the workflow running to create the necessary resources and deploy the app. The workflow may take several minutes to run. When it completes successfully, all of the jobs should have a green checkmark icon next to them.

    The completed GitHub workflow.

    Explore the Azure resources

    Once the GitHub Actions workflow has completed successfully you can browse the created resources in the Azure portal.

1) On the left navigation, select Resource Groups. Next, choose the msdocswebappapis resource group that was created by the GitHub Actions workflow.

    2) You should see seven resources available that match the screenshot and table descriptions below.

    The resources created in Azure.

• inventory (Container app): The containerized inventory API.
• msdocswebappapisacr (Container registry): A registry that stores the built container images for your apps.
• msdocswebappapisai (Application Insights): Provides advanced monitoring, logging and metrics for your apps.
• msdocswebappapisenv (Container apps environment): A container environment that manages networking, security and resource concerns. All of your containers live in this environment.
• msdocswebappapislogs (Log Analytics workspace): A workspace environment for managing logging and analytics for the container apps environment.
• products (Container app): The containerized products API.
• store (Container app): The Blazor front-end web app.

    3) You can view your running app in the browser by clicking on the store container app. On the overview page, click the Application Url link on the upper right of the screen.

The link to browse the app.

    Understanding the GitHub Actions workflow

The GitHub Actions workflow created and deployed resources to Azure using the deploy.yml file in the .github/workflows folder at the root of the project. The primary purpose of this file is to respond to events - such as commits to a branch - and run jobs to accomplish tasks. The deploy.yml file in the sample project has three main jobs:

    • Provision: Create the necessary resources in Azure, such as the container apps environment. This step leverages Bicep templates to create the Azure resources, which you'll explore in a moment.
    • Build: Create the container images for the three apps in the project and store them in the container registry.
    • Deploy: Deploy the container images to the different container apps created during the provisioning job.

    The deploy.yml file also accepts parameters to make the workflow more dynamic, such as setting the resource group name or the Azure region resources will be provisioned to.

    Below is a commented version of the deploy.yml file that highlights the essential steps.

name: Build and deploy .NET application to Container Apps

# Trigger the workflow on pushes to the deploy branch
on:
  push:
    branches:
      - deploy

env:
  # Set workflow variables
  RESOURCE_GROUP_NAME: msdocswebappapis

  REGION: eastus

  STORE_DOCKER: Store/Dockerfile
  STORE_IMAGE: store

  INVENTORY_DOCKER: Store.InventoryApi/Dockerfile
  INVENTORY_IMAGE: inventory

  PRODUCTS_DOCKER: Store.ProductApi/Dockerfile
  PRODUCTS_IMAGE: products

jobs:
  # Create the required Azure resources
  provision:
    runs-on: ubuntu-latest

    steps:

      - name: Checkout to the branch
        uses: actions/checkout@v2

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AzureSPN }}

      - name: Create resource group
        uses: azure/CLI@v1
        with:
          inlineScript: >
            echo "Creating resource group in Azure"
            echo "Executing 'az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}'"
            az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}

      # Use Bicep templates to create the resources in Azure
      - name: Creating resources
        uses: azure/CLI@v1
        with:
          inlineScript: >
            echo "Creating resources"
            az deployment group create --resource-group ${{ env.RESOURCE_GROUP_NAME }} --template-file '/github/workspace/Azure/main.bicep' --debug

  # Build the three app container images
  build:
    runs-on: ubuntu-latest
    needs: provision

    steps:

      - name: Checkout to the branch
        uses: actions/checkout@v2

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AzureSPN }}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Login to ACR
        run: |
          set -euo pipefail
          access_token=$(az account get-access-token --query accessToken -o tsv)
          refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
          docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

      - name: Build the products api image and push it to ACR
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}
          file: ${{ env.PRODUCTS_DOCKER }}

      - name: Build the inventory api image and push it to ACR
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}
          file: ${{ env.INVENTORY_DOCKER }}

      - name: Build the frontend image and push it to ACR
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}
          file: ${{ env.STORE_DOCKER }}

  # Deploy the three container images
  deploy:
    runs-on: ubuntu-latest
    needs: build

    steps:

      - name: Checkout to the branch
        uses: actions/checkout@v2

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AzureSPN }}

      - name: Installing Container Apps extension
        uses: azure/CLI@v1
        with:
          inlineScript: >
            az config set extension.use_dynamic_install=yes_without_prompt

            az extension add --name containerapp --yes

      - name: Login to ACR
        run: |
          set -euo pipefail
          access_token=$(az account get-access-token --query accessToken -o tsv)
          refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
          docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

      - name: Deploy Container Apps
        uses: azure/CLI@v1
        with:
          inlineScript: >
            az containerapp registry set -n products -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

            az containerapp update -n products -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}

            az containerapp registry set -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

            az containerapp update -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}

            az containerapp registry set -n store -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

            az containerapp update -n store -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}

      - name: logout
        run: >
          az logout

    Understanding the Bicep templates

    During the provisioning stage of the GitHub Actions workflow, the main.bicep file is processed. Bicep files provide a declarative way of generating resources in Azure and are ideal for managing infrastructure as code. You can learn more about Bicep in the related documentation. The main.bicep file in the sample project creates the following resources:

    • The container registry to store images of the containerized apps.
    • The container apps environment, which handles networking and resource management for the container apps.
    • Three container apps - one for the Blazor front-end and two for the back-end product and inventory APIs.
    • Configuration values to connect these services together

    main.bicep without Dapr

    param location string = resourceGroup().location

    # create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    # create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    # create the various configuration pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    # create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    # create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the store api container app
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    main.bicep with Dapr


    param location string = resourceGroup().location

// create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

// create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

// create the various config pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

// create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the store api container app
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: true
    }
    }


    Bicep Modules

The main.bicep file references modules to create resources, such as module products. Modules are a feature of Bicep templates that let you abstract resource declarations into their own files or sub-templates, so resources can be created in a more organized and reusable way. As the main.bicep file is processed, the referenced modules are also evaluated. Modules can also define input and output parameters that are passed to and from the parent template, such as the name of a resource.

    For example, the environment.bicep module extracts the details of creating a container apps environment into a reusable template. The module defines necessary resource dependencies such as Log Analytics Workspaces and an Application Insights instance.
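Because each module runs as its own nested deployment named after the module (containerAppEnvironment in this case), you can inspect the outputs it hands back to the parent template once provisioning finishes. A small sketch, assuming the placeholder resource group name used earlier:

# Show the outputs (id, appInsightsInstrumentationKey, appInsightsConnectionString) returned by the environment module
az deployment group show \
  --resource-group my-container-apps-rg \
  --name containerAppEnvironment \
  --query properties.outputs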

    environment.bicep without Dapr

    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString

    environment.bicep with Dapr


    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString


The container_app.bicep template defines numerous parameters to provide a reusable template for creating container apps. This allows the module to be used in other CI/CD pipelines as well.
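Because everything is parameterized, the same module can also be deployed on its own, outside this workflow. A rough sketch, where every value shown is a placeholder rather than something taken from the sample:

# Deploy a single container app straight from the reusable module (sketch; all values are placeholders)
az deployment group create \
  --resource-group my-container-apps-rg \
  --template-file container_app.bicep \
  --parameters name=myapp \
               containerAppEnvironmentId=<environment-resource-id> \
               registry=<registry-name>.azurecr.io \
               registryUsername=<registry-username> \
               registryPassword=<registry-password> \
               externalIngress=true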

    container_app.bicep without Dapr

    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn

    container_app.bicep with Dapr


    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param appProtocol string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn


    Understanding configuration differences with Dapr

    The code for this specific sample application is largely the same whether or not Dapr is integrated. However, even with this simple app, there are a few benefits and configuration differences when using Dapr that are worth exploring.

In this scenario, most of the changes are related to communication between the container apps. However, you can explore the full range of Dapr benefits by reading the Dapr integration with Azure Container Apps article in the conceptual documentation.

    Without Dapr

Without Dapr, the main.bicep template handles wiring up the front-end store app to communicate with the back-end APIs by manually managing environment variables. The Bicep template retrieves the fully qualified domain names (FQDNs) of the API apps as output parameters when they are created. Those values are then set as environment variables on the store container app.


// Retrieve environment variables from API container creation
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

// create the store api container app, passing in config
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

The environment variables are then retrieved inside the Program class and used to configure the base URLs of the corresponding HTTP clients.


    builder.Services.AddHttpClient("Products", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("ProductsApi")));
    builder.Services.AddHttpClient("Inventory", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("InventoryApi")));

    With Dapr

    Dapr can be enabled on a container app when it is created, as seen below. This configuration adds a Dapr sidecar to the app to streamline discovery and communication features between the different container apps in your environment.


// Create the container app with Dapr enabled
    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]

// Rest of template omitted for brevity...
    }
    }
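The same Dapr settings can also be applied when a container app is created from the CLI instead of Bicep. A minimal sketch, where the app name, environment name, and image are placeholders rather than values from the sample:

# Create a Dapr-enabled container app (sketch; names and image are placeholders)
az containerapp create \
  --name store \
  --resource-group my-container-apps-rg \
  --environment my-environment \
  --image <registry-name>.azurecr.io/store:latest \
  --enable-dapr \
  --dapr-app-id store \
  --dapr-app-port 80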

    Some of these Dapr features can be surfaced through the program file. You can configure your HttpClient to leverage Dapr configurations when communicating with other apps in your environment.


    // reconfigure code to make requests to Dapr sidecar
    var baseURL = (Environment.GetEnvironmentVariable("BASE_URL") ?? "http://localhost") + ":" + (Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500");
    builder.Services.AddHttpClient("Products", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Products");
    });

    builder.Services.AddHttpClient("Inventory", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Inventory");
    });
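Behind the scenes, the dapr-app-id header tells the sidecar which app to call through Dapr's service invocation API. For local testing, the equivalent request looks roughly like the following; the port reflects Dapr's default HTTP port and the route is a made-up example, not one from the sample:

# Call the products app through the Dapr sidecar's service invocation endpoint
curl http://localhost:3500/v1.0/invoke/products/method/api/products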


    Clean up resources

    If you're not going to continue to use this application, you can delete the Azure Container Apps and all the associated services by removing the resource group.

    Follow these steps in the Azure portal to remove the resources you created:

    1. In the Azure portal, navigate to the msdocswebappsapi resource group using the left navigation or search bar.
    2. Select the Delete resource group button at the top of the resource group Overview.
    3. Enter the resource group name msdocswebappsapi in the Are you sure you want to delete "msdocswebappsapi" confirmation dialog.
    4. Select Delete.
      The process to delete the resource group may take a few minutes to complete.
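If you prefer the command line, removing the resource group (and everything inside it) is a single call; substitute your actual resource group name:

# Delete the resource group and all of the resources it contains
az group delete --name msdocswebappsapi --yes --no-wait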

    · 9 min read
    Justin Yoo

    Welcome to Day 21 of #30DaysOfServerless!

    We've so far walked through what Azure Event Grid is and how it generally works. Today, let's discuss how Azure Event Grid deals with CloudEvents.


    What We'll Cover


    OK. Let's get started!

    What is CloudEvents?

Needless to say, events are everywhere. They come not only from event-driven systems but also from many different systems and devices, including IoT devices like a Raspberry Pi.

But the problem is that every event publisher (the system or device that creates events) describes its events differently; there is no standard way of describing events. This has caused many issues between systems, mainly from the interoperability perspective:

    1. Consistency: No standard way of describing events resulted in developers having to write their own event handling logic for each event source.
    2. Accessibility: There were no common libraries, tooling and infrastructure to deliver events across systems.
3. Productivity: Overall productivity decreases because there is no standard format for events.

    Cloud Events Logo

Therefore, the CNCF (Cloud Native Computing Foundation) introduced a concept called CloudEvents. CloudEvents is a specification for describing event data in a common way. Conforming event data to this spec simplifies event declaration and delivery across systems and platforms, resulting in a huge productivity increase.

    How Azure Event Grid brokers CloudEvents

Before CloudEvents, Azure Event Grid described events in its own way, so to use Azure Event Grid you had to follow the event format/schema that it declares. However, not every system, service or application follows the Azure Event Grid schema. Therefore, Azure Event Grid now supports the CloudEvents spec as both an input and an output format.

    Azure Event Grid for Azure

Take a look at the simple diagram below, which describes how Azure Event Grid captures events raised from various Azure services. In this diagram, Azure Key Vault takes the role of the event source or event publisher, and Azure Logic Apps takes the role of the event handler (I'll discuss Azure Logic Apps as the event handler later in this post). For events raised by Azure services, we use an Azure Event Grid System Topic.

    Azure Event Grid for Azure

    Therefore, let's create an Azure Event Grid System Topic that captures events raised from Azure Key Vault when a new version of a secret is added.

    Azure Event Grid System Topic for Key Vault
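If you'd rather script this than click through the portal, a rough CLI sketch looks like the following; the topic name, location, subscription ID and Key Vault resource ID are all placeholders:

# Create a system topic that surfaces Key Vault events (sketch; names and IDs are placeholders)
az eventgrid system-topic create \
  --name kv-system-topic \
  --resource-group rg-aegce-krc \
  --location <location> \
  --topic-type Microsoft.KeyVault.vaults \
  --source /subscriptions/<subscription-id>/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/<key-vault-name>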

    As Azure Event Grid makes use of the pub/sub pattern, you need to create the Azure Event Grid Subscription to consume the events. Here's the subscription that uses the Event Grid data format:

Azure Event Grid System Subscription for Key Vault in Event Grid Format

    Once you create the subscription, create a new version of the secret on Azure Key Vault. Then, Azure Key Vault raises an event, which is captured in the Event Grid format:

    [
    {
    "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
    "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
    "subject": "hello",
    "eventType": "Microsoft.KeyVault.SecretNewVersionCreated",
    "data": {
    "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
    "VaultName": "kv-xxxxxxxx",
    "ObjectType": "Secret",
    "ObjectName": "hello",
    "Version": "064dfc082fec463f8d4610ed6118811d",
    "NBF": null,
    "EXP": null
    },
    "dataVersion": "1",
    "metadataVersion": "1",
    "eventTime": "2022-09-21T07:08:09.1234567Z"
    }
    ]

    So, how is it different from the CloudEvents format? Let's take a look. According to the spec, the JSON data in CloudEvents might look like this:

    {
    "id" : "C234-1234-1234",
    "source" : "/mycontext",
    "specversion" : "1.0",
    "type" : "com.example.someevent",
    "comexampleextension1" : "value",
    "time" : "2018-04-05T17:31:00Z",
    "datacontenttype" : "application/cloudevents+json",
    "data" : {
    "appinfoA" : "abc",
    "appinfoB" : 123,
    "appinfoC" : true
    }
    }

    This time, let's create another subscription using the CloudEvents schema. Here's how to create the subscription against the system topic:

    Azure Event Grid System Subscription for Key Vault in CloudEvents Format
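From the CLI, the only real difference from the earlier subscription is the delivery schema. A sketch, where the subscription name and endpoint are placeholders:

# Create an event subscription that delivers events in the CloudEvents v1.0 schema (sketch)
az eventgrid system-topic event-subscription create \
  --name kv-cloudevents-sub \
  --resource-group rg-aegce-krc \
  --system-topic-name kv-system-topic \
  --endpoint <webhook-or-logic-apps-endpoint> \
  --event-delivery-schema cloudeventschemav1_0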

With this subscription in place, the same Azure Key Vault event is now delivered in the CloudEvents format:

    {
    "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
    "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
    "specversion": "1.0",
    "type": "Microsoft.KeyVault.SecretNewVersionCreated",
    "subject": "hello",
    "time": "2022-09-21T07:08:09.1234567Z",
    "data": {
    "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
    "VaultName": "kv-xxxxxxxx",
    "ObjectType": "Secret",
    "ObjectName": "hello",
    "Version": "064dfc082fec463f8d4610ed6118811d",
    "NBF": null,
    "EXP": null
    }
    }

    Can you identify some differences between the Event Grid format and the CloudEvents format? Fortunately, both Event Grid schema and CloudEvents schema look similar to each other. But they might be significantly different if you use a different event source outside Azure.

    Azure Event Grid for Systems outside Azure

As mentioned above, event data described outside Azure, or by your own applications within Azure, might not be understandable by Azure Event Grid. In this case, we need to use an Azure Event Grid Custom Topic. Here's the diagram for it:

    Azure Event Grid for Applications outside Azure

Let's create the Azure Event Grid Custom Topic. When you create the topic, make sure that you choose the CloudEvents schema during the provisioning process:

    Azure Event Grid Custom Topic
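From the CLI, the important part is choosing the CloudEvents input schema at creation time. A minimal sketch with placeholder names:

# Create a custom topic that accepts events in the CloudEvents v1.0 schema (sketch)
az eventgrid topic create \
  --name my-cloudevents-topic \
  --resource-group rg-aegce-krc \
  --location <location> \
  --input-schema cloudeventschemav1_0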

    If your application needs to publish events to Azure Event Grid Custom Topic, your application should build the event data in the CloudEvents format. If you use a .NET application, add the NuGet package first.

    dotnet add package Azure.Messaging.EventGrid

    Then, create the publisher instance. You've already got the topic endpoint URL and the access key.

    var topicEndpoint = new Uri("<Azure Event Grid Custom Topic Endpoint URL>");
    var credential = new AzureKeyCredential("<Azure Event Grid Custom Topic Access Key>");
    var publisher = new EventGridPublisherClient(topicEndpoint, credential);
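If you don't have those two values at hand, they can be looked up with the CLI; the topic and resource group names below are placeholders:

# Look up the custom topic endpoint and one of its access keys
az eventgrid topic show --name my-cloudevents-topic --resource-group rg-aegce-krc --query endpoint --output tsv
az eventgrid topic key list --name my-cloudevents-topic --resource-group rg-aegce-krc --query key1 --output tsv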

    Now, build the event data like below. Make sure that you follow the CloudEvents schema that requires additional metadata like event source, event type and content type.

    var source = "/your/event/source";
    var type = "com.source.event.your/OnEventOccurs";

    var data = new MyEventData() { Hello = "World" };

    var @event = new CloudEvent(source, type, data);

    And finally, send the event to Azure Event Grid Custom Topic.

    await publisher.SendEventAsync(@event);

    The captured event data looks like the following:

    {
    "id": "cc2b2775-52b8-43b8-a7cc-c1c33c2b2e59",
    "source": "/your/event/source",
    "type": "com.source.event.my/OnEventOccurs",
    "data": {
    "Hello": "World"
    },
    "time": "2022-09-21T07:08:09.1234567+00:00",
    "specversion": "1.0"
    }

However, due to limitations, someone might insist that their existing application doesn't or can't emit event data in the CloudEvents format - yet there's no way to send event data to this Azure Event Grid Custom Topic unless it's in the CloudEvents format. In this case, what should we do? One approach we can apply is to put a converter between the existing application and the Azure Event Grid Custom Topic, like below:

    Azure Event Grid for Applications outside Azure with Converter

Once the Function app (or any converter app) receives the legacy event data, it internally converts the data to the CloudEvents format and publishes it to Azure Event Grid.

    var data = default(MyRequestData);
    using (var reader = new StreamReader(req.Body))
    {
    var serialised = await reader.ReadToEndAsync();
    data = JsonConvert.DeserializeObject<MyRequestData>(serialised);
    }

    var converted = new MyEventData() { Hello = data.Lorem };
    var @event = new CloudEvent(source, type, converted);

    The converted event data is captured like this:

    {
    "id": "df296da3-77cd-4da2-8122-91f631941610",
    "source": "/your/event/source",
    "type": "com.source.event.my/OnEventOccurs",
    "data": {
    "Hello": "ipsum"
    },
    "time": "2022-09-21T07:08:09.1234567+00:00",
    "specversion": "1.0"
    }

This approach is beneficial in many integration scenarios because it canonicalises all the event data.

    How Azure Logic Apps consumes CloudEvents

I put Azure Logic Apps as the event handler in the previous diagrams. According to the CloudEvents spec, each event handler must implement request validation to avoid abuse. One good thing about using Azure Logic Apps is that it has already implemented this request validation feature, which means we can simply subscribe to the topic and consume the event data.

Create a new Logic Apps instance and add the HTTP Request trigger. Once you save it, you will get the endpoint URL.

    Azure Logic Apps with HTTP Request Trigger

    Then, create the Azure Event Grid Subscription with:

    • Endpoint type: Webhook
    • Endpoint URL: The Logic Apps URL from above.

    Azure Logic Apps with HTTP Request Trigger

Once the subscription is ready, this Logic App works as the event handler. Here's how it receives the CloudEvents data from the subscription.

    Azure Logic Apps that Received CloudEvents data

    Now you've got the CloudEvents data. It's entirely up to you to handle that event data however you want!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how Azure Event Grid with CloudEvents works. Alternatively, the "Deploy to Azure" button below will provision all necessary Azure resources and deploy an Azure Functions app to mimic the event publisher.

    Deploy To Azure

    Resources: For self-study!

    Want to know more about CloudEvents in real-life examples? Here are several resources you can take a look at:


    · 10 min read
    Ayca Bas

    Welcome to Day 20 of #30DaysOfServerless!

Every day, millions of people spend their precious time in productivity tools. What if you could use the data and intelligence behind Microsoft applications (Microsoft Teams, Outlook, and many other Office apps) to build seamless automations and custom apps that boost productivity?

    In this post, we'll learn how to build a seamless onboarding experience for new employees joining a company with the power of Microsoft Graph, integrated with Event Hubs and Logic Apps!


    What We'll Cover

    • ✨ The power of Microsoft Graph
    • 🖇️ How do Microsoft Graph and Event Hubs work together?
    • 🛠 Let's Build an Onboarding Workflow!
      • 1️⃣ Setup Azure Event Hubs + Key Vault
      • 2️⃣ Subscribe to users, receive change notifications from Logic Apps
      • 3️⃣ Create Onboarding workflow in the Logic Apps
    • 🚀 Debug: Your onboarding experience
    • ✋ Exercise: Try this tutorial out yourself!
    • 📚 Resources: For Self-Study


    ✨ The Power of Microsoft Graph

Microsoft Graph is the gateway to data and intelligence in the Microsoft 365 platform. Microsoft Graph exposes REST APIs and client libraries to access data across Microsoft 365 core services such as Calendar, Teams, To Do, Outlook, People, Planner, OneDrive, OneNote and more.

    Overview of Microsoft Graph

You can build custom experiences by using Microsoft Graph, such as automating the onboarding process for new employees: when new employees are created in Azure Active Directory, they are automatically added to the Onboarding team on Microsoft Teams.

    Solution architecture


    🖇️ Microsoft Graph with Event Hubs

    Microsoft Graph uses a webhook mechanism to track changes in resources and deliver change notifications to the clients. For example, with Microsoft Graph Change Notifications, you can receive change notifications when:

    • a new task is added in the to-do list
    • a user changes the presence status from busy to available
    • an event is deleted/cancelled from the calendar

If you'd like to track a large set of resources at a high frequency, use Azure Event Hubs instead of traditional webhooks to receive change notifications. Azure Event Hubs is a popular real-time event ingestion and distribution service built for scale.

    EVENT GRID - PARTNER EVENTS

    Microsoft Graph Change Notifications can be also received by using Azure Event Grid -- currently available for Microsoft Partners! Read the Partner Events Overview documentation for details.

Setup Azure Event Hubs + Key Vault

To get Microsoft Graph Change Notifications delivered to Azure Event Hubs, we'll have to set up Azure Event Hubs and Azure Key Vault. We'll use Azure Key Vault to provide access to the Event Hubs connection string.

    1️⃣ Create Azure Event Hubs

1. Go to the Azure Portal and select Create a resource, type Event Hubs and select Create.
    2. Fill in the Event Hubs namespace creation details, and then click Create.
    3. Go to the newly created Event Hubs namespace page, select Event Hubs tab from the left pane and + Event Hub:
      • Name your Event Hub as Event Hub
      • Click Create.
    4. Click the name of the Event Hub, and then select Shared access policies and + Add to add a new policy:
      • Give a name to the policy
      • Check Send and Listen
      • Click Create.
    5. After the policy has been created, click the name of the policy to open the details panel, and then copy the Connection string-primary key value. Write it down; you'll need it for the next step.
    6. Go to Consumer groups tab in the left pane and select + Consumer group, give a name for your consumer group as onboarding and select Create.

    2️⃣ Create Azure Key Vault

    1. Go to Azure Portal and select Create a resource, type Key Vault and select Create.
    2. Fill in the Key Vault creation details, and then click Review + Create.
    3. Go to newly created Key Vault and select Secrets tab from the left pane and click + Generate/Import:
      • Give a name to the secret
      • For the value, paste in the connection string you generated at the Event Hubs step
      • Click Create
      • Copy the name of the secret.
    4. Select Access Policies from the left pane and + Add Access Policy:
      • For Secret permissions, select Get
      • For Principal, select Microsoft Graph Change Tracking
      • Click Add.
    5. Select Overview tab from the left pane and copy the Vault URI.
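If you'd rather script this setup, a rough CLI sketch of the same resources looks like the following; every name is a placeholder, and the access policy for the Microsoft Graph Change Tracking principal still needs to be granted as described above:

# Event Hubs namespace, event hub and consumer group (sketch; names are placeholders)
az eventhubs namespace create --name my-graph-ns --resource-group my-rg --location <location>
az eventhubs eventhub create --name my-graph-hub --namespace-name my-graph-ns --resource-group my-rg
az eventhubs eventhub consumer-group create --name onboarding --eventhub-name my-graph-hub --namespace-name my-graph-ns --resource-group my-rg

# Key Vault plus a secret holding the Event Hubs connection string
az keyvault create --name my-graph-kv --resource-group my-rg --location <location>
az keyvault secret set --vault-name my-graph-kv --name eventhub-connection --value "<connection-string-primary-key>"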

Subscribe to users and receive change notifications with Logic Apps

To start receiving Microsoft Graph Change Notifications, we'll need to create a subscription to the resource that we'd like to track - here, users. We'll use Azure Logic Apps to create the subscription.

To create a subscription for Microsoft Graph Change Notifications, we'll need to make an HTTP POST request to https://graph.microsoft.com/v1.0/subscriptions. Microsoft Graph requires Azure Active Directory authentication to make API calls. First, we'll register an app in Azure Active Directory, and then we'll make the Microsoft Graph subscription API call with Azure Logic Apps.

    1️⃣ Create an app in Azure Active Directory

    1. In the Azure Portal, go to Azure Active Directory and select App registrations from the left pane and select + New registration. Fill in the details for the new App registration form as below:
      • Name: Graph Subscription Flow Auth
      • Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
      • Select Register.
    2. Go to newly registered app in Azure Active Directory, select API permissions:
      • Select + Add a permission and Microsoft Graph
      • Select Application permissions and add User.Read.All and Directory.Read.All.
      • Select Grant admin consent for the organization
    3. Select Certificates & secrets tab from the left pane, select + New client secret:
      • Choose desired expiry duration
      • Select Add
      • Copy the value of the secret.
    4. Go to Overview from the left pane, copy Application (client) ID and Directory (tenant) ID.

    2️⃣ Create subscription with Azure Logic Apps

1. Go to the Azure Portal and select Create a resource, type Logic apps and select Create.

    2. Fill in the Logic Apps creation details, and then click Create.

    3. Go to the newly created Logic Apps page, select Workflows tab from the left pane and select + Add:

      • Give a name to the new workflow as graph-subscription-flow
      • Select Stateful as a state type
      • Click Create.
    4. Go to graph-subscription-flow, and then select Designer tab.

    5. In the Choose an operation section, search for Schedule and select Recurrence as a trigger. Fill in the parameters as below:

      • Interval: 61
      • Frequency: Minute
      • Time zone: Select your own time zone
      • Start time: Set a start time
    6. Select + button in the flow and select add an action. Search for HTTP and select HTTP as an action. Fill in the parameters as below:

      • Method: POST
      • URI: https://graph.microsoft.com/v1.0/subscriptions
      • Headers:
        • Key: Content-type
        • Value: application/json
      • Body:
      {
      "changeType": "created, updated",
      "clientState": "secretClientValue",
      "expirationDateTime": "@{addHours(utcNow(), 1)}",
      "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47",
      "resource": "users"
      }

      In notificationUrl, make sure to replace <YOUR-VAULT-URI> with the vault uri and <YOUR-KEY-VAULT-SECRET-NAME> with the secret name that you copied from the Key Vault.

      In resource, define the resource type you'd like to track changes. For our example, we will track changes for users resource.

      • Authentication:
        • Authentication type: Active Directory OAuth
        • Authority: https://login.microsoft.com
        • Tenant: Directory (tenant) ID copied from AAD app
        • Audience: https://graph.microsoft.com
        • Client ID: Application (client) ID copied from AAD app
        • Credential Type: Secret
        • Secret: value of the secret copied from AAD app
    7. Select Save and run your workflow from the Overview tab.

Check your subscription in Graph Explorer: If you'd like to make sure that your subscription was created successfully by Logic Apps, you can go to Graph Explorer, log in with your Microsoft 365 account and make a GET request to https://graph.microsoft.com/v1.0/subscriptions. Your subscription should appear in the response after it's created successfully.

    Subscription workflow success

After the subscription is created successfully by Logic Apps, Azure Event Hubs will receive notifications whenever a new user is created in Azure Active Directory.


    Create Onboarding workflow in Logic Apps

We'll create a second workflow in the Logic App to receive change notifications from Event Hubs when a new user is created in Azure Active Directory and add the new user to the Onboarding team on Microsoft Teams.

    1. Go to the Logic Apps you created in the previous steps, select Workflows tab and create a new workflow by selecting + Add:
      • Give a name to the new workflow as teams-onboarding-flow
      • Select Stateful as a state type
      • Click Create.
    2. Go to teams-onboarding-flow, and then select Designer tab.
3. In the Choose an operation section, search for Event Hub and select When events are available in Event Hub as a trigger. Set up the Event Hub connection as below:
      • Create Connection:
        • Connection name: Connection
        • Authentication Type: Connection String
        • Connection String: Go to Event Hubs > Shared Access Policies > RootManageSharedAccessKey and copy Connection string–primary key
        • Select Create.
      • Parameters:
        • Event Hub Name: Event Hub
        • Consumer Group Name: onboarding
    4. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: Events
    5. Inside For each, select + in the flow and add an action, search for Data operations and select Parse JSON. Fill in Parse JSON action as below:
      • Content: Events Content
      • Schema: Copy the json content from schema-parse.json and paste as a schema
    6. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: value
      1. Inside For each, select + in the flow and add an action, search for Microsoft Teams and select Add a member to a team. Login with your Microsoft 365 account to create a connection and fill in Add a member to a team action as below:
• Team: Create an Onboarding team on Microsoft Teams and select it
      • A user AAD ID for the user to add to a team: id
    7. Select Save.

    🚀 Debug your onboarding experience

To debug our onboarding experience, we'll create a new user in Azure Active Directory and see if it's automatically added to the Onboarding team on Microsoft Teams.

    1. Go to Azure Portal and select Azure Active Directory from the left pane and go to Users. Select + New user and Create new user. Fill in the details as below:

      • User name: JaneDoe
      • Name: Jane Doe

      new user in Azure Active Directory

2. When you added Jane Doe as a new user, it should trigger the teams-onboarding-flow to run.

  teams onboarding flow success

3. Once the teams-onboarding-flow runs successfully, you should be able to see Jane Doe as a member of the Onboarding team on Microsoft Teams! 🥳

  new member in Onboarding team on Microsoft Teams

    Congratulations! 🎉

    You just built an onboarding experience using Azure Logic Apps, Azure Event Hubs and Azure Key Vault.


    📚 Resources


    Serverless September - In a Nutshell

    · 7 min read
    Devanshi Joshi

It's Serverless September in a Nutshell! Join us as we unpack our month-long learning journey exploring the core technology pillars for Serverless architectures on Azure. We'll end with a look at next steps to build your Cloud-native applications on Azure.


    What We'll Cover

    • Functions-as-a-Service (FaaS)
    • Microservices and Containers
    • Serverless Integrations
    • End-to-End Solutions
    • Developer Tools & #Hacktoberfest

    Banner for Serverless September


    Building Cloud-native Apps

    By definition, cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. You can learn more about cloud-native in Kendall Roden's #ServerlessSeptember post on Going Cloud-native with Azure Container Apps.

Serverless technologies accelerate productivity and minimize costs for deploying applications at cloud scale. So, what can we build with serverless technologies in cloud-native Azure solutions? Anything that is event-driven - examples include:

    • Microservices - scaled by KEDA-compliant triggers
    • Public API Endpoints - scaled by #concurrent HTTP requests
    • Event-Driven Applications - scaled by length of message queue
    • Web Applications - scaled by #concurrent HTTP requests
    • Background Process - scaled by CPU and Memory usage

    Great - but as developers, we really want to know how we can get started building and deploying serverless solutions on Azure. That was the focus of our #ServerlessSeptember journey. Let's take a quick look at the four key themes.

    Functions-as-a-Service (FaaS)

Functions-as-a-Service (FaaS) is the epitome of developer productivity for full-stack modern apps. As a developer, you don't manage infrastructure and focus only on business logic and application code. And, with serverless compute, you only pay when your code runs - making this the simplest first step to begin migrating your application to cloud-native.

In Azure, FaaS is provided by Azure Functions. Check out our Functions + Serverless on Azure series to go from learning core concepts to building your first Functions app in your programming language of choice. Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell.

    Want to get extended language support for languages like Go, and Rust? You can Use Custom Handlers to make this happen! But what if you want to have long-running functions, or create complex workflows involving more than one function? Read our post on Durable Entities to learn how you can orchestrate this with Azure Functions.

    Check out this recent AskTheExpert Q&A session with the Azure Functions team to get answers to popular community questions on Azure Functions features and usage.

    Microservices and Containers

    Functions-as-a-Service is an ideal first step towards serverless development. But Functions are just one of the 5 pillars of cloud-native. This week we'll look at two of the other pillars: microservices and containers - with specific focus on two core technologies: Azure Container Apps and Dapr (Distributed Application Runtime).

    In this 6-part series of posts, we walk through each technology independently, before looking at the value of building Azure Container Apps with Dapr.

    • In Hello Container Apps we learned core concepts & deployed our first ACA.
    • In Microservices Communication we learned about ACA environments and virtual networks, and how microservices communicate in ACA with a hands-on tutorial.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and configuring ACA for autoscaling with KEDA-compliant triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr), exploring its Building Block APIs and sidecar architecture for working with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
• Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Build ACA with Dapr

    Check out this recent AskTheExpert Q&A session with the Azure Container Apps team for answers to popular community questions on core features and usage.

    Serverless Integrations

    In the first half of the month we looked at compute resources for building and deploying serverless applications. In the second half, we look at integration tools and resources that automate developer workflows to streamline the end-to-end developer experience.

In Azure, this is enabled by services like Azure Logic Apps and Azure Event Grid. Azure Logic Apps provides a visual designer to create and automate workflows with little or no code involved. Azure Event Grid provides a highly scalable event broker with support for pub/sub communications to drive async event-driven architectures.

    • In Tracking Weather Data Changes With Logic Apps we look at how you can use Logic Apps to integrate the MSN weather service with Azure CosmosDB, allowing automated collection of weather data on changes.

    • In Teach the Cloud to Read & Categorize Mail we take it a step further, using Logic Apps to automate a workflow that includes a Computer Vision service to "read" images and store the results to CosmosDB.

    • In Integrate with Microsoft Graph we explore a multi-cloud scenario (Azure + M365) where change notifications from Microsoft Graph can be integrated using Logic Apps and Event Hubs to power an onboarding workflow.

    • In Cloud Events with Event Grid we learn about the CloudEvents specification (for consistently describing event data) - and learn how Event Grid brokers events in this format. Azure Logic Apps can be an Event handler (subscriber) that uses the event to trigger an automated workflow on receipt.

      Azure Event Grid And Logic Apps

    Want to explore other such integrations? Browse Azure Architectures and filter by selected Azure services for more real-world scenarios.


    End-to-End Solutions

    We've covered serverless compute solutions (for building your serverless applications) and serverless integration services to automate end-to-end workflows in synchronous or asynchronous event-driven architectures. In this final week, we want to leave you with a sense of end-to-end development tools and use cases that can be enabled by Serverless on Azure. Here are some key examples:

• In this tutorial, you'll learn to deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps - with a Blazor front-end and two Web API projects.
• Deploy Java containers to cloud: In this tutorial you learn to build and deploy a Java application running on Spring Boot, publishing it in a container to Azure Container Registry and then deploying it to Azure Container Apps from ACR via the Azure Portal.
• Where am I? My GPS Location with Serverless Power Platform Custom Connector: In this step-by-step tutorial you learn to integrate a serverless application (built on Azure Functions and OpenAPI) with Power Platform custom connectors via Azure API Management (API-M). This pattern can empower a new ecosystem of fusion apps for cases like inventory management.
    And in our Serverless Hacks initiative, we walked through an 8-step hack to build a serverless tollbooth. Check out this 12-part video walkthrough of a reference solution using .NET.

    Developer Tools

But wait - there's more. Those are a sample of the end-to-end application scenarios built on serverless on Azure. But what about the developer experience? In this article, we say hello to the Azure Developer CLI - an open-source tool that streamlines your develop-deploy workflow, with simple commands that map to core stages of your development journey. Go from code to cloud with one CLI!

    And watch this space for more such tutorials and content through October, including a special #Hacktoberfest focused initiative to encourage and support first-time contributors to open-source. Here's a sneak peek at the project we plan to share - the new awesome-azd templates gallery.


    Join us at Microsoft Ignite!

    Want to continue your learning journey, and learn about what's next for Serverless on Azure? Microsoft Ignite happens Oct 12-14 this year and has multiple sessions on relevant technologies and tools. Check out the Session Catalog and register here to attend online.


    Welcome Students!

    · 3 min read
    Sara Gibbons

    ✨ Serverless September For Students

My love for the tech industry grows as it evolves. Not just for the new technologies to play with, but for seeing how paths into a tech career continue to expand, allowing so many new voices, ideas and perspectives into our industry, with serverless computing removing barriers to entry for so many.

    It's a reason I enjoy working with universities and students. I get to hear the excitement of learning, fresh ideas and perspectives from our student community. All you students are incredible! How you view serverless, and what it can do, so cool!

This year for Serverless September we want to hear about all the amazing ways our student community is learning and working with Azure Serverless, and we have all new ways for you to participate.

    Getting Started

    If you don't already have an Azure for Students account you can easily get your FREE account created at Azure for Students Sign up.

    If you are new to serverless, here are a couple links to get you started:

    No Experience, No problem

    For Serverless September we have planned beginner friendly content all month long. Covering such services as:

You can follow #30DaysOfServerless here on the blog for daily posts covering concepts, scenarios, and how to create end-to-end solutions.

    Join the Cloud Skills Challenge where we have selected a list of Learn Modules for you to go through at your own pace, including deploying a full stack application with Azure Static Web Apps.

Have A Question?

We want to hear it! All month long we will have Ask The Expert sessions. Submit your questions at any time and we'll be sure to get one of our Azure Serverless experts to get you an answer.

    Share What You've Created

If you have written a blog post, recorded a video, or have an open source Azure Serverless project, we'd love to see it! Here are some links for you to share your creations:

    🧭 Explore Student Resources

    ⚡️ Join us!

    Multiple teams across Microsoft are working to create Serverless September! They all want to hear from our incredible student community. We can't wait to share all the Serverless September resources and hear what you have learned and created. Here are some ways to keep up to date on all Serverless September activity:


    ...and that's it! We've successfully deployed our application on Azure!

    But there's more!

    Best practices: Monitoring and CI/CD!

    In my opinion, it's not enough to just set up the application on Azure! I want to know that my web app is performant and serving my users reliably! I also want to make sure that I'm not inadvertently breaking my application as I continue to make changes to it. Thankfully, the Azure Developer CLI also handles all of this via two additional commands - azd monitor and azd pipeline config.

    Application Monitoring

    When we provisioned all of our infrastructure, we also set up application monitoring via a Bicep file in our .infra/ directory that spec'd out an Application Insights dashboard. By running azd monitor we can see the dashboard with live metrics that was configured for the application.

    We can also navigate to the Application Dashboard by clicking on the resource group name, where you can set a specific refresh rate for the dashboard, and see usage, reliability, and performance metrics over time.

    I don't know about everyone else but I have spent a ton of time building out similar dashboards. It can be super time-consuming to write all the queries and create the visualizations so this feels like a real time saver.

    CI/CD

    Finally let's talk about setting up CI/CD! This might be my favorite azd feature. As I mentioned before, the Azure Developer CLI has a command, azd pipeline config, which uses the files in the .github/ directory to set up a GitHub Action. More than that, if there is no upstream repo, the Developer CLI will actually help you create one. But what does this mean exactly? Because our GitHub Action is using the same commands you'd run in the CLI under the hood, we're actually going to have CI/CD set up to run on every commit into the repo, against real Azure resources. What a sweet collaboration feature!

    That's it! We've gone end-to-end with the Azure Developer CLI - initialized a project, provisioned the resources on Azure, deployed our code on Azure, set up monitoring logs and dashboards, and set up a CI/CD pipeline with GitHub Actions to run on every commit into the repo (on real Azure resources!).
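For reference, the whole loop we just walked through maps to a handful of commands; the template name below is a placeholder, not a recommendation:

# Initialize from a template, provision and deploy, then add monitoring and CI/CD
azd init --template <template-name>
azd provision        # create the Azure resources described in the infra templates
azd deploy           # push the application code to those resources
azd monitor          # open the live metrics dashboard
azd pipeline config  # create and configure the GitHub Actions pipeline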

    Exercise: Try it yourself or create your own template!

    As an exercise, try out the workflow above with any template on GitHub!

    Or, try turning your own project into an Azure Developer CLI-enabled template by following this guidance. If you create your own template, don't forget to tag the repo with the azd-templates topic on GitHub to help others find it (unfamiliar with GitHub topics? Learn how to add topics to your repo)! We'd also love to chat with you about your experience creating an azd template - if you're open to providing feedback around this, please fill out this form!

    Resources



    · 11 min read
    Kendall Roden

    Welcome to Day 13 of #30DaysOfServerless!

    In the previous post, we learned about all things Distributed Application Runtime (Dapr) and highlighted the capabilities you can unlock through managed Dapr in Azure Container Apps! Today, we'll dive into how we can make use of Container Apps secrets and managed identities to securely access cloud-hosted resources that your Container Apps depend on!

    Ready? Let's go.


    What We'll Cover

    • Secure access to external services overview
    • Using Container Apps Secrets
    • Using Managed Identity for connecting to Azure resources
    • Using Dapr secret store component references (Dapr-only)
    • Conclusion
    • Resources: For self-study!


    Securing access to external services

    In most, if not all, microservice-based applications, one or more services in the system will rely on other cloud-hosted resources; Think external services like databases, secret stores, message brokers, event sources, etc. To interact with these services, an application must have the ability to establish a secure connection. Traditionally, an application will authenticate to these backing resources using some type of connection string or password.

    I'm not sure if it was just me, but one of the first things I learned as a developer was to ensure credentials and other sensitive information were never checked into the codebase. The ability to inject these values at runtime is a non-negotiable.

    In Azure Container Apps, applications can securely leverage connection information via Container Apps Secrets. If the resource is Azure-based, a more ideal solution that removes the dependence on secrets altogether is using Managed Identity.

    Specifically for Dapr-enabled container apps, users can now tap into the power of the Dapr secrets API! With this new capability unlocked in Container Apps, users can call the Dapr secrets API from application code to securely access secrets from Key Vault or other backing secret stores. In addition, customers can also make use of a secret store component reference when wiring up Dapr state store components and more!

ALSO, I'm excited to share that support for Dapr + Managed Identity is now available! What does this mean? It means that you can enable Managed Identity for your container app, and when establishing connections via Dapr, the Dapr sidecar can use this identity! This means simplified components without the need for secrets when connecting to Azure services!

    Let's dive a bit deeper into the following three topics:

    1. Using Container Apps secrets in your container apps
    2. Using Managed Identity to connect to Azure services
    3. Connecting to services securely for Dapr-enabled apps

    Secure access to external services without Dapr

    Leveraging Container Apps secrets at runtime

    Users can leverage this approach for any values which need to be securely stored, however, it is recommended to use Managed Identity where possible when connecting to Azure-specific resources.

    First, let's establish a few important points regarding secrets in container apps:

    • Secrets are scoped at the container app level, meaning secrets cannot be shared across container apps today
• When running in multiple-revision mode,
  • changes to secrets do not generate a new revision
  • running revisions will not be automatically restarted to reflect changes. If you want existing container app revisions to pick up the changed secret values, you will need to restart them (see the sketch below).
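For example, after changing a secret you can push the new value and then restart a revision so it picks the change up. A minimal sketch, where the app, resource group and revision names are placeholders:

# Update a secret value, then restart an existing revision so it sees the change
az containerapp secret set \
  --name myQueueApp \
  --resource-group my-resource-group \
  --secrets queue-connection-string=<new-connection-string>

az containerapp revision restart \
  --name myQueueApp \
  --resource-group my-resource-group \
  --revision <revision-name>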
    STEP 1

    Provide the secure value as a secret parameter when creating your container app using the syntax "SECRET_NAME=SECRET_VALUE"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name queuereader \
    --environment "my-environment-name" \
    --image demos/queuereader:v1 \
    --secrets "queue-connection-string=$CONNECTION_STRING"
    STEP 2

    Create an environment variable which references the value of the secret created in step 1 using the syntax "ENV_VARIABLE_NAME=secretref:SECRET_NAME"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name myQueueApp \
    --environment "my-environment-name" \
    --image demos/myQueueApp:v1 \
    --secrets "queue-connection-string=$CONNECTIONSTRING" \
    --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"

    This ConnectionString environment variable can be used within your application code to securely access the connection string value at runtime.

    Using Managed Identity to connect to Azure services

    A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. This approach is recommended where possible as it eliminates the need for managing secret credentials in your container apps and allows you to properly scope the permissions needed for a given container app using role-based access control. Both system-assigned and user-assigned identities are available in container apps. For more background on managed identities in Azure AD, see Managed identities for Azure resources.

    To configure your app with a system-assigned managed identity, you will follow steps similar to the following:

    STEP 1

    Run the following command to create a system-assigned identity for your container app

    az containerapp identity assign \
    --name "myQueueApp" \
    --resource-group "my-resource-group" \
    --system-assigned
    STEP 2

    Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    az containerapp identity show \
    --name "myQueueApp" \
    --resource-group "my-resource-group"
    STEP 3

    Assign the appropriate roles and permissions to your container app's managed identity, using the Principal ID from step 2, based on the resources you need to access (example below)

    az role assignment create \
    --role "Storage Queue Data Contributor" \
    --assignee $PRINCIPAL_ID \
    --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/queueServices/default/queues/<queue>"

    After running the above commands, your container app will be able to access your Azure Storage queue because its managed identity has been assigned the "Storage Queue Data Contributor" role. The role assignments you create will depend solely on the resources your container app needs to access. To instrument your code to use this managed identity, see more details here.

    In addition to using managed identity to access services from your container app, you can also use managed identity to pull your container images from Azure Container Registry.

    Secure access to external services with Dapr

    For Dapr-enabled apps, there are a few ways to connect to the resources your solutions depend on. In this section, we will discuss when to use each approach.

    1. Using Container Apps secrets in your Dapr components
    2. Using Managed Identity with Dapr Components
    3. Using Dapr Secret Stores for runtime secrets and component references

    Using Container Apps secrets in Dapr components

    Prior to support for the Dapr Secrets Management building block, this was the only approach available for securely storing sensitive values for use in Dapr components.

    In Dapr OSS, when no secret store reference is provided in a Dapr component file, the default secret store is set to "Kubernetes secrets". In Container Apps, we do not expose the ability to use this default store. Rather, Container Apps secrets can be used in its place.

    With the introduction of the Secrets API and the ability to use Dapr + Managed Identity, this approach is useful for a limited number of scenarios:

    • Quick demos and dev/test scenarios using the Container Apps CLI
    • Securing values when a secret store is not configured or available for use
    • Using service principal credentials to configure an Azure Key Vault secret store component (using Managed Identity is recommended)
    • Securing access credentials which may be required when creating a non-Azure secret store component
    STEP 1

    Create a Dapr component which can be used by one or more services in the container apps environment. In the below example, you will create a secret to store the storage account key and reference this secret from the appropriate Dapr metadata property.

    componentType: state.azure.blobstorage
    version: v1
    metadata:
    - name: accountName
      value: testStorage
    - name: accountKey
      secretRef: account-key
    - name: containerName
      value: myContainer
    secrets:
    - name: account-key
      value: "<STORAGE_ACCOUNT_KEY>"
    scopes:
    - myApp
    STEP 2

    Deploy the Dapr component using the below command with the appropriate arguments.

     az containerapp env dapr-component set \
    --name "my-environment" \
    --resource-group "my-resource-group" \
    --dapr-component-name statestore \
    --yaml "./statestore.yaml"

    Using Managed Identity with Dapr Components

    Dapr-enabled container apps can now make use of managed identities within Dapr components. This is the preferred path for connecting to Azure services securely, and it removes the need for sensitive values in the component itself.

    The Dapr sidecar makes use of the existing identities available within a given container app; Dapr itself does not have its own identity. Therefore, the steps to enable Dapr + MI are similar to those in the section regarding managed identity for non-Dapr apps. See the example steps below, specifically for using a system-assigned identity:

    1. Create a system-assigned identity for your container app

    2. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    3. Assign the appropriate roles and permissions (for accessing resources backing your Dapr components) to your ACA's managed identity using the Principal ID

    4. Create a simplified Dapr component without any secrets required

       componentType: state.azure.blobstorage
       version: v1
       metadata:
       - name: accountName
         value: testStorage
       - name: containerName
         value: myContainer
       scopes:
       - myApp
    5. Deploy the component to test the connection from your container app via Dapr!

    Keep in mind, all Dapr components will be loaded by each Dapr-enabled container app in an environment by default. To prevent apps without the appropriate permissions from attempting (and failing) to load a component, use scopes. This ensures that only applications with the appropriate identities to access the backing resource load the component.

    Using Dapr Secret Stores for runtime secrets and component references

    Dapr integrates with secret stores to provide apps and other components with secure storage and access to secrets such as access keys and passwords. The Dapr Secrets API is now available for use in Container Apps.

    Using Dapr’s secret store building block typically involves the following:

    • Setting up a component for a specific secret store solution.
    • Retrieving secrets using the Dapr secrets API in the application code.
    • Optionally, referencing secrets in Dapr component files.

    Let's walk through a couple of sample workflows involving the use of Dapr's Secrets Management capabilities!

    Setting up a component for a specific secret store solution

    1. Create an Azure Key Vault instance for hosting the secrets required by your application.

      az keyvault create --name "<your-unique-keyvault-name>" --resource-group "my-resource-group" --location "<your-location>"
    2. Create an Azure Key Vault component in your environment without the secrets values, as the connection will be established to Azure Key Vault via Managed Identity.

       componentType: secretstores.azure.keyvault
       version: v1
       metadata:
       - name: vaultName
         value: "[your_keyvault_name]"
       scopes:
       - myApp

       az containerapp env dapr-component set \
       --name "my-environment" \
       --resource-group "my-resource-group" \
       --dapr-component-name secretstore \
       --yaml "./secretstore.yaml"
    3. Run the following command to create a system-assigned identity for your container app

      az containerapp identity assign \
      --name "myApp" \
      --resource-group "my-resource-group" \
      --system-assigned
    4. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

      az containerapp identity show \
      --name "myApp" \
      --resource-group "my-resource-group"
    5. Assign the appropriate roles and permissions to your container app's managed identity to access Azure Key Vault

      az role assignment create \
      --role "Key Vault Secrets Officer" \
      --assignee $PRINCIPAL_ID \
      --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
    6. Begin using the Dapr Secrets API in your application code to retrieve secrets, as sketched below. See additional details here.
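    For reference, retrieving a secret is a simple HTTP GET against the sidecar. A minimal sketch, assuming the component name secretstore from step 2 and a hypothetical secret named my-app-secret stored in the Key Vault:

    # Called from inside the container app; the Dapr sidecar listens on port 3500 by default
    curl http://localhost:3500/v1.0/secrets/secretstore/my-app-secret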

    Referencing secrets in Dapr component files

    Once a Dapr secret store component is available in the environment, it can be used to retrieve secrets for use in other components. For example, when creating a state store component, you can add a reference to the Dapr secret store from which you would like to source connection information. You will no longer use secrets directly in the component spec, but rather will instruct the Dapr sidecar to retrieve the secrets from the specified store.

    componentType: state.azure.blobstorage
    version: v1
    metadata:
    - name: accountName
      value: testStorage
    - name: accountKey
      secretRef: account-key
    - name: containerName
      value: myContainer
    secretStoreComponent: "<SECRET_STORE_COMPONENT_NAME>"
    scopes:
    - myApp

    Summary

    In this post, we have covered the high-level details of how to work with secret values in Azure Container Apps for both Dapr and non-Dapr apps. In the next article, we will walk through a complex, end-to-end Dapr example which makes use of the new support for Dapr + Managed Identity. Stay tuned for additional documentation around Dapr secrets, as it will be released in the next two weeks!

    Resources

    Here are the main resources to explore for self-study:



    · 8 min read
    Nitya Narasimhan

    Welcome to Day 12 of #30DaysOfServerless!

    So far we've looked at Azure Container Apps - what it is, how it enables microservices communication, and how it enables auto-scaling with KEDA compliant scalers. Today we'll shift gears and talk about Dapr - the Distributed Application Runtime - and how it makes microservices development with ACA easier with core building blocks and a sidecar architecture!

    Ready? Let's go!


    What We'll Cover

    • What is Dapr and why use it?
    • Building Block APIs
    • Dapr Quickstart and Tutorials
    • Dapr-enabled ACA: A Sidecar Approach
    • Exercise: Build & Deploy a Dapr-enabled ACA.
    • Resources: For self-study!


    Hello, Dapr!

    Building distributed applications is hard. Building reliable and portable microservices means having middleware that deals with challenges like service discovery, sync and async communications, state management, secure information sharing and more. Integrating these support services into your application can be challenging from both development and maintenance perspectives, adding complexity that is independent of the core application logic you want to focus on.

    This is where Dapr (Distributed Application Runtime) shines - it's defined as:

    a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.

    But what does this actually mean to me as an app developer?


    Dapr + Apps: A Sidecar Approach

    The strength of Dapr lies in its ability to:

    • abstract complexities of distributed systems middleware - with Building Block APIs that implement components using best practices to tackle key challenges.
    • implement a Sidecar Pattern with interactions via APIs - allowing applications to keep their codebase clean and focus on app logic.
    • be Incrementally Adoptable - allowing developers to start by integrating one API, then evolving to use more as and when needed.
    • be Platform Agnostic - allowing applications to be developed in a preferred language or framework without impacting integration capabilities.

    The application-dapr sidecar interaction is illustrated below. The API abstraction allows applications to get the desired functionality without having to know how it was implemented, or having to integrate Dapr-specific code into their codebase. Note how the sidecar process listens on port 3500 and the API provides clear routes for the specific building blocks supported by Dapr (e.g., /secrets, /state, etc.)
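    For example, here is roughly what a state-management interaction with the sidecar looks like over plain HTTP - a sketch assuming a state store component named statestore is configured for the app:

    # Save a key/value pair through the sidecar's state API
    curl -X POST http://localhost:3500/v1.0/state/statestore \
    -H "Content-Type: application/json" \
    -d '[{ "key": "order1", "value": { "id": 1, "status": "open" } }]'

    # Read it back
    curl http://localhost:3500/v1.0/state/statestore/order1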


    Dapr Building Blocks: API Interactions

    Dapr Building Blocks refer to the HTTP and gRPC API endpoints exposed by the Dapr sidecar, providing key capabilities like state management, observability, service-to-service invocation, pub/sub messaging and more to the associated application.

    Building Blocks: Under the Hood
    The Dapr API is implemented by modular components that codify best practices for tackling the specific challenge that they represent. The API abstraction allows component implementations to evolve, or alternatives to be used, without requiring changes to the application codebase.

    The latest Dapr release has the building blocks shown in the above figure. Not all capabilities are available to Azure Container Apps by default - check the documentation for the latest list of building blocks that the Azure Container Apps + Dapr integration makes available to your application.

    In the next section, we'll dive into Dapr-enabled Azure Container Apps. Before we do that, here are a couple of resources to help you explore the Dapr platform by itself, and get more hands-on experience with the concepts and capabilities:

    • Dapr Quickstarts - build your first Dapr app, then explore quickstarts for core APIs including service-to-service invocation, pub/sub, state management, bindings and secrets management.
    • Dapr Tutorials - go beyond the basic quickstart and explore more realistic service integrations and usage scenarios. Try the distributed calculator example!

    Integrate Dapr & Azure Container Apps

    Dapr currently has a v1.9 (preview) version, but Azure Container Apps supports Dapr v1.8. In this section, we'll look at what it takes to enable, configure, and use Dapr integration with Azure Container Apps. It involves 3 steps: enabling Dapr using settings, configuring Dapr components (API) for use, then invoking the APIs.

    Here's a simple publisher-subscriber scenario from the documentation. We have two container apps identified as publisher-app and subscriber-app deployed in a single environment. Each ACA has an activated daprd sidecar, allowing them to use the Pub/Sub API to communicate asynchronously with each other - without having to write the underlying pub/sub implementation themselves. Rather, we can see that the Dapr API uses a pubsub.azure.servicebus component to implement that capability.

    Pub/sub example

    Let's look at how this is setup.

    1. Enable Dapr in ACA: Settings

    We can enable Dapr integration in the Azure Container App during creation by specifying settings in one of two ways, based on your development preference:

    • Using Azure CLI: use custom commandline options for each setting
    • Using Infrastructure-as-Code (IaC): using properties for Bicep, ARM templates

    Once enabled, a Dapr sidecar runs alongside your Azure Container App and listens on port 3500 for API requests. Dapr components defined at the environment level can be shared by multiple container apps deployed in the same environment.

    There are four main settings we will focus on for this demo - the example below shows the ARM template properties, but you can find the equivalent CLI parameters here for comparison.

    • dapr.enabled - enable Dapr for Azure Container App
    • dapr.appPort - specify port on which app is listening
    • dapr.appProtocol - specify if using http (default) or gRPC for API
    • dapr.appId - specify unique application ID for service discovery, usage

    These are defined under the properties.configuration section for your resource. Changing Dapr settings does not update the revision but it will restart ACA revisions and replicas. Here is what the relevant section of the ARM template looks like for the publisher-app ACA in the scenario shown above.

    "dapr": {
    "enabled": true,
    "appId": "publisher-app",
    "appProcotol": "http",
    "appPort": 80
    }

    2. Configure Dapr in ACA: Components

    The next step after activating the Dapr sidecar is to define the APIs that you want to use and potentially specify the Dapr components (specific implementations of that API) that you prefer. These components are created at the environment level, and by default, Dapr-enabled container apps in an environment will load the complete set of deployed components -- use the scopes property to ensure only components needed by a given app are loaded at runtime. Here's what the ARM template resources section looks like for the example above. This tells us that the environment has a dapr-pubsub component of type pubsub.azure.servicebus deployed - and that the component is loaded only by the container apps with the Dapr app IDs publisher-app and subscriber-app.

    USING MANAGED IDENTITY + DAPR

    The secrets approach used here is ideal for demo purposes. However, we recommend using Managed Identity with Dapr in production. For more details on secrets, check out tomorrow's post on Secrets and Managed Identity in Azure Container Apps

    {
      "resources": [
        {
          "type": "daprComponents",
          "name": "dapr-pubsub",
          "properties": {
            "componentType": "pubsub.azure.servicebus",
            "version": "v1",
            "secrets": [
              {
                "name": "sb-root-connectionstring",
                "value": "value"
              }
            ],
            "metadata": [
              {
                "name": "connectionString",
                "secretRef": "sb-root-connectionstring"
              }
            ],
            // Application scopes
            "scopes": ["publisher-app", "subscriber-app"]
          }
        }
      ]
    }

    With this configuration, the ACA is now set to use pub/sub capabilities from the Dapr sidecar, using standard HTTP requests to the exposed API endpoint for this service.
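    As a quick sketch of what that looks like from the publisher-app side, publishing to a hypothetical orders topic through the dapr-pubsub component configured above is a single HTTP call to the sidecar:

    # Publish a message via the Dapr pub/sub API (sidecar on port 3500 by default)
    curl -X POST http://localhost:3500/v1.0/publish/dapr-pubsub/orders \
    -H "Content-Type: application/json" \
    -d '{ "orderId": "123" }'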

    Exercise: Deploy Dapr-enabled ACA

    In the next couple posts in this series, we'll be discussing how you can use the Dapr secrets API and doing a walkthrough of a more complex example, to show how Dapr-enabled Azure Container Apps are created and deployed.

    However, you can get hands-on experience with these concepts by walking through one of these two tutorials, each providing an alternative approach to configure and set up the application described in the scenario below:

    Resources

    Here are the main resources to explore for self-study:



    · 7 min read
    Paul Yu

    Welcome to Day 11 of #30DaysOfServerless!

    Yesterday we explored Azure Container Apps concepts related to environments, networking and microservices communication - and illustrated these with a deployment example. Today, we turn our attention to scaling your container apps with demand.


    What We'll Cover

    • What makes ACA Serverless?
    • What is KEDA?
    • Scaling Your ACA
    • ACA Scaling In Action
    • Exercise: Explore azure-opensource-labs examples
    • Resources: For self-study!


    So, what makes Azure Container Apps "serverless"?

    Today we are going to focus on what makes Azure Container Apps (ACA) a "serverless" offering. But what does the term "serverless" really mean? As much as we'd like to think there aren't any servers involved, that is certainly not the case. In general, "serverless" means that most (if not all) server maintenance has been abstracted away from you.

    With serverless, you don't spend any time managing and patching servers. This concern is offloaded to Azure and you simply focus on adding business value through application delivery. In addition to operational efficiency, cost efficiency can be achieved with serverless on-demand pricing models. Your workload horizontally scales out based on need and you only pay for what you use. To me, this is serverless, and my teammate @StevenMurawski said it best... "being able to scale to zero is what gives ACA its serverless magic."

    Scaling your Container Apps

    If you don't know by now, ACA is built on a solid open-source foundation. Behind the scenes, it runs on a managed Kubernetes cluster and includes several open-source components out of the box, including Dapr to help you build and run microservices, Envoy Proxy for ingress capabilities, and KEDA for event-driven autoscaling. Again, you do not need to install these components yourself. All you need to be concerned with is enabling and/or configuring your container app to leverage these components.

    Let's take a closer look at autoscaling in ACA to help you optimize your container app.

    What is KEDA?

    KEDA stands for Kubernetes Event-Driven Autoscaler. It is an open-source project initially started by Microsoft and Red Hat and has been donated to the Cloud-Native Computing Foundation (CNCF). It is maintained by a community of 200+ contributors and adopted by many large organizations. In terms of its status as a CNCF project, it is currently in the Incubating Stage, which means the project has gone through significant due diligence and is on its way towards the Graduation Stage.

    Prior to KEDA, horizontally scaling your Kubernetes deployment was achieved through the Horizontal Pod Autoscaler (HPA), which relies on resource metrics such as CPU and memory to determine when additional replicas should be deployed. Being limited to CPU and memory falls a bit short for certain workloads. This is especially true for apps that need to process messages from a queue, or HTTP-based apps that can only handle a specific number of incoming HTTP requests at a time. KEDA aims to fill that gap and provides a much more robust framework for scaling by working in conjunction with HPA. It offers many scalers for you to implement and even allows your deployments to scale to zero! 🥳

    KEDA architecture

    Configuring ACA scale rules

    As I mentioned above, ACA's autoscaling feature leverages KEDA and gives you the ability to configure the number of replicas to deploy based on rules (event triggers). The number of replicas can be configured as a static number or a range (minimum and maximum). So if you need your containers to run 24/7, set the min and max to be the same value. By default, when you deploy a container app, it is set to scale from 0 to 10 replicas. The default scaling rule uses HTTP scaling and defaults to a minimum of 10 concurrent requests per second. Once the threshold of 10 concurrent requests per second is met, another replica will be deployed until it reaches the maximum number of replicas.

    At the time of this writing, a container app can have up to 30 replicas.

    Default autoscaler

    As a best practice, if you have a min/max replicas range configured, you should configure a scaling rule even if it is just explicitly setting the default values (see the sketch below).
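    As a sketch of what that looks like from the command line - assuming a recent version of the Azure CLI containerapp extension, which exposes the scale-rule parameters - you can set the replica range and an explicit HTTP rule in one update:

    # Set min/max replicas and an explicit HTTP concurrency scale rule
    # (flags assume a recent containerapp CLI extension)
    az containerapp update \
    --name my-container-app \
    --resource-group "my-resource-group" \
    --min-replicas 0 \
    --max-replicas 10 \
    --scale-rule-name http-rule \
    --scale-rule-type http \
    --scale-rule-http-concurrency 10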

    Adding HTTP scaling rule

    In addition to HTTP scaling, you can also configure an Azure queue rule, which allows you to use Azure Storage Queues as an event data source.

    Adding Azure Queue scaling rule

    The most flexibility comes with the Custom rule type. This opens up a LOT more options for scaling. All of KEDA's event-based scalers are supported with this option 🚀

    Adding Custom scaling rule

    Translating KEDA templates to Azure templates

    When you implement Custom rules, you need to become familiar with translating KEDA templates to Azure Resource Manager templates or ACA YAML manifests. The KEDA scaler documentation is great, and it should be simple to translate KEDA template metadata to ACA rule metadata.

    The images below show how to translate a scaling rule that uses Azure Service Bus as an event data source. The custom rule type is set to azure-servicebus and details of the Service Bus are added to the Metadata section. One important thing to note here is that the connection string to the Service Bus was added as a secret on the container app, and the trigger parameter must be set to connection.
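    Here's a rough ARM-template sketch of that translation for the scale section under properties.template - the queue name, message count, and secret name are placeholders for your own values:

    "scale": {
      "minReplicas": 0,
      "maxReplicas": 10,
      "rules": [
        {
          "name": "servicebus-queue-rule",
          "custom": {
            "type": "azure-servicebus",
            "metadata": {
              "queueName": "orders",
              "messageCount": "5"
            },
            "auth": [
              {
                "secretRef": "sb-connection-string",
                "triggerParameter": "connection"
              }
            ]
          }
        }
      ]
    }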

    Azure Container App custom rule metadata

    Azure Container App custom rule metadata

    Additional examples of KEDA scaler conversion can be found in the resources section and example video below.

    See Container App scaling in action

    Now that we've built up some foundational knowledge on how ACA autoscaling is implemented and configured, let's look at a few examples.

    Autoscaling based on HTTP traffic load

    Autoscaling based on Azure Service Bus message queues

    Summary

    ACA brings you a true serverless experience and gives you the ability to configure autoscaling rules based on KEDA scaler templates. This gives you the flexibility to scale based on a wide variety of data sources in an event-driven manner. With the number of built-in scalers currently available, there is probably a scaler out there for all your use cases. If not, I encourage you to get involved with the KEDA community and help make it better!

    Exercise

    By now, you've probably read and seen enough and are ready to give autoscaling a try. The example I walked through in the videos above can be found at the azure-opensource-labs repo. I highly encourage you to head over to the containerapps-terraform folder and try the lab out. There you'll find instructions covering all the steps and tools you'll need to implement autoscaling container apps within your own Azure subscription.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun scaling your containers!

    Resources



    · 8 min read
    Paul Yu

    Welcome to Day 10 of #30DaysOfServerless!

    We continue our exploration of Azure Container Apps, with today's focus being communication between microservices, and how to configure your Azure Container Apps environment in the context of a deployment example.


    What We'll Cover

    • ACA Environments & Virtual Networking
    • Basic Microservices Communications
    • Walkthrough: ACA Deployment Example
    • Summary and Next Steps
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    In yesterday's post, we learned what the Azure Container Apps (ACA) service is and the problems it aims to solve. It is considered to be a Container-as-a-Service platform since many of the complex implementation details of running a Kubernetes cluster are managed for you.

    Some of the use cases for ACA include event-driven processing jobs and background tasks, but this article will focus on hosting microservices and how they can communicate with each other within the ACA service. By the end of this article, you will have a solid understanding of how networking and communication are handled, and you'll be left with a few tutorials to try.

    Environments and virtual networking in ACA

    Before we jump into microservices communication, we should review how networking works within ACA. With ACA being a managed service, Azure will take care of most of your underlying infrastructure concerns. As you provision an ACA resource, Azure provisions an Environment to deploy Container Apps into. This environment is your isolation boundary.

    Azure Container Apps Environment

    By default, Azure creates and manages a new Virtual Network (VNET) for you and the VNET is associated with the environment. As you deploy container apps, they are deployed into the same VNET and the environment is assigned a static public IP address which allows your apps to be accessible over the internet. This VNET is not visible or manageable.

    If you need control of the networking flows within the VNET, you can pre-provision one and tell Azure to deploy an environment within it. This "bring-your-own" VNET model allows you to deploy an environment in either External or Internal modes. Deploying an environment in External mode gives you the flexibility of managing your own VNET, while still allowing your containers to be accessible from outside the environment; a static public IP address is assigned to the environment. When deploying in Internal mode, your containers are accessible within the environment and/or VNET but not accessible from the internet.

    Bringing your own VNET will require some planning, and you will need to dedicate an empty subnet which will be used exclusively by the ACA environment. The size of your subnet depends on how many containers you plan on deploying and your scaling requirements; one requirement to know is that the subnet address range must have a /23 CIDR prefix at minimum. You will also need to think about your deployment strategy, since ACA has the concept of Revisions which also consume IPs from your subnet.
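    As a rough sketch, creating an environment in your own subnet (internal mode) looks something like this - the subnet resource ID is a placeholder, and the flags assume a recent containerapp CLI extension:

    # Create an internal-only environment inside an existing, dedicated subnet
    az containerapp env create \
    --name "my-environment" \
    --resource-group "my-resource-group" \
    --location "<your-location>" \
    --infrastructure-subnet-resource-id "<subnet-resource-id>" \
    --internal-only true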

    Some additional restrictions to consider when planning your subnet address space are listed in the Resources section below and may be addressed in future posts, so be sure to follow us on dev.to and bookmark the ServerlessSeptember site.

    Basic microservices communication in ACA

    When it comes to communications between containers, ACA addresses this concern with its Ingress capabilities. With HTTP Ingress enabled on your container app, you can expose your app on a HTTPS endpoint.

    If your environment is deployed using default networking and your containers need to be accessible from outside the environment, you will need to set the Ingress traffic option to Accepting traffic from anywhere. This will generate a Fully-Qualified Domain Name (FQDN) which you can use to access your app right away. The ingress feature also generates and assigns a Secure Socket Layer (SSL) certificate for the FQDN.

    External ingress on Container App

    If your environment is deployed using default networking and your containers only need to communicate with other containers in the environment, you'll need to set the Ingress traffic option to Limited to Container Apps Environment. You get a FQDN here as well, but in the section below we'll see how that changes.

    Internal ingress on Container App

    As mentioned in the networking section above, if you deploy your ACA environment into a VNET in internal mode, your options will be Limited to Container Apps Environment or Limited to VNet.

    Ingress on internal virtual network

    Note how the Accepting traffic from anywhere option is greyed out. If your VNET is deployed in external mode, then the option will be available.

    Let's walk though an example ACA deployment

    The diagram below illustrates a simple microservices application that I deployed to ACA. The three container apps all have ingress enabled. The greeting-service app calls two backend services; a hello-service that returns the text Hello (in random casing) and a world-service that returns the text World (in a few random languages). The greeting-service concatenates the two strings together and returns Hello World to the browser. The greeting-service is the only service accessible via external ingress while two backend services are only accessible via internal ingress.

    Greeting Service overview

    With ingress enabled, let's take a quick look at the FQDN structures. Here is the FQDN of the external greeting-service.

    https://greeting-service.victoriouswave-3749d046.eastus.azurecontainerapps.io

    We can break it down into these components:

    https://[YOUR-CONTAINER-APP-NAME].[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    And here is the FQDN of the internal hello-service.

    https://hello-service.internal.victoriouswave-3749d046.eastus.azurecontainerapps.io

    Can you spot the difference between FQDNs?

    That was too easy 😉... the word internal is added as a subdomain in the FQDN between your container app name and the random name for all internal ingress endpoints.

    https://[YOUR-CONTAINER-APP-NAME].internal.[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    Now that we know the internal service FQDNs, we use them in the greeting-service app to achieve basic service-to-service communications.

    So we can inject FQDNs of downstream APIs into upstream apps using environment variables, but the downside to this approach is that you need to deploy downstream containers ahead of time, and this dependency will need to be planned for during your deployment process. There are ways around this, and one option is to leverage the auto-injected environment variables within your app code.

    If I use the Console blade for the hello-service container app and run the env command, I can see environment variables named CONTAINER_APP_NAME and CONTAINER_APP_ENV_DNS_SUFFIX. You can use these values to determine FQDNs within your upstream app.

    hello-service environment variables

    Back in the greeting-service container, I can invoke the hello-service container's sayhello method. I know the container app name is hello-service and this service is exposed over internal ingress; therefore, if I add the internal subdomain to the CONTAINER_APP_ENV_DNS_SUFFIX, I can invoke an HTTP request to the hello-service from my greeting-service container.
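    Here's a minimal sketch of that call from inside the greeting-service container, building the internal FQDN from the injected suffix (the sayhello route comes from the example above):

    # Build the internal FQDN of hello-service from the injected environment variable
    HELLO_SERVICE_URL="https://hello-service.internal.${CONTAINER_APP_ENV_DNS_SUFFIX}"

    # Call the backend service over internal ingress
    curl "${HELLO_SERVICE_URL}/sayhello"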

    Invoke the sayHello method from the greeting-service container

    As you can see, the ingress feature enables communication with other container apps over HTTP/S, and ACA will inject environment variables into our container to help determine what the ingress FQDNs would be. All we need now is a little bit of code modification in the greeting-service app to build the FQDNs of our backend APIs by retrieving these environment variables.

    Greeting service code

    ... and now we have a working microservices app on ACA! 🎉

    Hello World

    Summary and next steps

    We've covered Container Apps networking and the basics of how containers communicate with one another. However, there is a better way to address service-to-service invocation using Dapr, which is an open-source framework for building microservices. It is natively integrated into the ACA service and in a future post, you'll learn how to enable it in your Container App to address microservices concerns and more. So stay tuned!

    Exercises

    As a takeaway for today's post, I encourage you to complete this tutorial, and if you'd like to deploy the sample app that was presented in this article, my teammate @StevenMurawski is hosting a docker-compose-examples repo which includes samples for deploying to ACA using Docker Compose files. To learn more about the az containerapp compose command, links to his blog articles are listed in the Resources section below.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun packing and shipping containers! See you in the next post!

    Resources

    The sample app presented here was inspired by services demonstrated in the book Introducing Distributed Application Runtime (Dapr): Simplifying Microservices Applications Development Through Proven and Reusable Patterns and Practices. Go check it out to learn more about Dapr!



    · 12 min read
    Nitya Narasimhan

    Welcome to Day 9 of #30DaysOfServerless!


    What We'll Cover

    • The Week Ahead
    • Hello, Container Apps!
    • Quickstart: Build Your First ACA!
    • Under The Hood: Core ACA Concepts
    • Exercise: Try this yourself!
    • Resources: For self-study!


    The Week Ahead

    Welcome to Week 2 of #ServerlessSeptember, where we put the focus on Microservices and building Cloud-Native applications that are optimized for serverless solutions on Azure. One week is not enough to do this complex topic justice so consider this a 7-part jumpstart to the longer journey.

    1. Hello, Container Apps (ACA) - Learn about Azure Container Apps, a key service that helps you run microservices and containerized apps on a serverless platform. Know the core concepts. (Tutorial 1: First ACA)
    2. Communication with Microservices - Dive deeper into two key concepts: environments and virtual networking. Learn how microservices communicate in ACA, and walkthrough an example. (Tutorial 2: ACA with 3 Microservices)
    3. Scaling Your Container Apps - Learn about KEDA. Understand how to configure your ACA for auto-scaling with KEDA-supported triggers. Put this into action by walking through a tutorial. (Tutorial 3: Configure Autoscaling)
    4. Hello, Distributed Application Runtime (Dapr) - Learn about Dapr and how its Building Block APIs simplify microservices development with ACA. Know how the sidecar pattern enables incremental adoption of Dapr APIs without requiring any Dapr code integration in app. (Tutorial 4: Setup & Explore Dapr)
    5. Building ACA with Dapr - See how Dapr works with ACA by building a Dapr-enabled Azure Container App. Walk through a .NET tutorial using Pub/Sub and State Management APIs in an enterprise scenario. (Tutorial 5: Build ACA with Dapr)
    6. Managing Secrets With Dapr - We'll look at the Secrets API (a key Building Block of Dapr) and learn how it simplifies management of sensitive information in ACA.
    7. Microservices + Serverless On Azure - We recap Week 2 (Microservices) and set the stage for Week 3 (Integrations) of Serverless September. Plus, self-study resources including ACA development tutorials in different languages.

    Ready? Let's go!


    Azure Container Apps!

    When building your application, your first decision is about where you host your application. The Azure Architecture Center has a handy chart to help you decide between choices like Azure Functions, Azure App Service, Azure Container Instances, Azure Container Apps and more. But if you are new to this space, you'll need a good understanding of the terms and concepts behind these services. Today, we'll focus on Azure Container Apps (ACA) - so let's start with the fundamentals.

    Containerized App Defined

    A containerized app is one where the application components, dependencies, and configuration are packaged into a single file (container image), which can be instantiated in an isolated runtime environment (container) that is portable across hosts (OS). This makes containers lightweight and scalable - and ensures that applications behave consistently on different host platforms.

    Container images can be shared via container registries (public or private) helping developers discover and deploy related apps with less effort. Scaling a containerized app can be as simple as activating more instances of its container image. However, this requires container orchestrators to automate the management of container apps for efficiency. Orchestrators use technologies like Kubernetes to support capabilities like workload scheduling, self-healing and auto-scaling on demand.

    Cloud-Native & Microservices

    Containers are seen as one of the 5 pillars of Cloud-Native app development, an approach where applications are designed explicitly to take advantage of the unique benefits of modern dynamic environments (involving public, private and hybrid clouds). Containers are particularly suited to serverless solutions based on microservices.

    • With serverless - developers use managed services instead of managing their own infrastructure. Services are typically event-driven and can be configured for autoscaling with rules tied to event triggers. Serverless is cost-effective, with developers paying only for the compute cycles and resources they use.
    • With microservices - developers compose their applications from independent components. Each component can be deployed in its own container, and scaled at that granularity. This simplifies component reuse (across apps) and maintainability (over time) - with developers evolving functionality at microservice (vs. app) levels.

    Hello, Azure Container Apps!

    Azure Container Apps is the managed service that helps you run containerized apps and microservices as a serverless compute solution, on Azure. You can:

    • deploy serverless API endpoints - autoscaled by HTTP request traffic
    • host background processing apps - autoscaled by CPU or memory load
    • handle event-driven processing - autoscaled by #messages in queue
    • run microservices - autoscaled by any KEDA-supported scaler.

    Want a quick intro to the topic? Start by watching the short video below - then read these two posts from our ZeroToHero series:


    Deploy Your First ACA

    Dev Options

    We typically have three options for development:

    • Use the Azure Portal - provision and deploy from a browser.
    • Use Visual Studio Code (with relevant extensions) - if you prefer an IDE
    • Using Azure CLI - if you prefer to build and deploy from command line.

    The documentation site has quickstarts for three contexts:

    For this quickstart, we'll go with the first option (sample image) so we can move quickly to core concepts. We'll leave the others as an exercise for you to explore.

    1. Setup Resources

    PRE-REQUISITES

    You need:

    • An Azure account with an active subscription
    • An installed Azure CLI

    Start by logging into Azure from the CLI. The command should launch a browser to complete the auth flow (or give you an option to take an alternative path).

    $ az login

    Successful authentication will result in extensive command-line output detailing the status of your subscription.

    Next, install the Azure Container Apps extension for the CLI

    $ az extension add --name containerapp --upgrade
    ...
    The installed extension 'containerapp' is in preview.

    Once successfully installed, register the Microsoft.App namespace.

    $ az provider register --namespace Microsoft.App

    Then set local environment variables in that terminal - and verify they are set correctly:

    $ RESOURCE_GROUP="my-container-apps"
    $ LOCATION="canadacentral"
    $ CONTAINERAPPS_ENVIRONMENT="my-environment"

    $ echo $LOCATION $RESOURCE_GROUP $CONTAINERAPPS_ENVIRONMENT
    canadacentral my-container-apps my-environment

    Now you can use Azure CLI to provision a resource group for this tutorial. Creating a resource group also makes it easier for us to delete/reclaim all resources used at the end of this tutorial.

    az group create \
    --name $RESOURCE_GROUP \
    --location $LOCATION
    Congratulations

    You completed the Setup step!

    On completion, the console should print out the details of the newly created resource group. You should also be able to visit the Azure Portal and find the newly-active my-container-apps resource group under your active subscription.

    2. Create Environment

    An environment is like the picket fence around your property. It creates a secure boundary that contains a group of container apps - such that all apps deployed to it share the same virtual network and logging resources.

    $ az containerapp env create \
    --name $CONTAINERAPPS_ENVIRONMENT \
    --resource-group $RESOURCE_GROUP \
    --location $LOCATION

    No Log Analytics workspace provided.
    Generating a Log Analytics workspace with name ...

    This can take a few minutes. When done, you will see the terminal display more details. You can also check the resource group in the portal and see that a Container Apps Environment and a Log Analytics Workspace are created for you as part of this step.

    You've got the fence set up. Now it's time to build your home - er, container app!

    3. Create Container App

    Here's the command we'll use to create our first Azure Container App. Note that the --image argument provides the link to a pre-existing containerapps-helloworld image.

    az containerapp create \
    --name my-container-app \
    --resource-group $RESOURCE_GROUP \
    --environment $CONTAINERAPPS_ENVIRONMENT \
    --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
    --target-port 80 \
    --ingress 'external' \
    --query properties.configuration.ingress.fqdn
    ...
    ...

    Container app created. Access your app at <URL>

    The --ingress property shows that the app is open to external requests; in other words, it is publicly visible at the <URL> that is printed out on the terminal on successful completion of this step.

    4. Verify Deployment

    Let's see if this works. You can verify your container app by visiting the URL returned above in your browser. You should see something like this!

    Container App Hello World

    You can also visit the Azure Portal and look under the created Resource Group. You should see that a new Container App resource was created after this step.

    Congratulations

    You just created and deployed your first "Hello World" Azure Container App! This validates your local development environment setup and existence of a valid Azure subscription.

    5. Clean Up Your Resources

    It's good practice to clean up resources once you are done with a tutorial.

    THIS ACTION IS IRREVERSIBLE

    This command deletes the resource group we created above - and all resources in it. So make sure you specified the right name, then confirm deletion.

    $ az group delete --name $RESOURCE_GROUP
    Are you sure you want to perform this operation? (y/n):

    Note that you can also delete the resource group from the Azure Portal interface if that feels more comfortable. For now, we'll just use the Portal to verify that deletion occurred. If you had previously opened the Resource Group page for the created resource, just refresh it. You should see something like this:

    Resource Not Found


    Core Concepts

    COMING SOON

    An illustrated guide summarizing these concepts in a single sketchnote.

    We covered a lot today - we'll stop with a quick overview of core concepts behind Azure Container Apps, each linked to documentation for self-study. We'll dive into more details on some of these concepts in upcoming articles:

    • Environments - are the secure boundary around a group of container apps that are deployed in the same virtual network. They write logs to a shared Log Analytics workspace and can communicate seamlessly using Dapr, if used.
    • Containers refer to the container image deployed in the Azure Container App. They can use any runtime, programming language, or development stack - and be discovered using any public or private container registry. A container app can support multiple containers.
    • Revisions are immutable snapshots of an Azure Container App. The first revision is created when the ACA is first deployed, with new revisions created when redeployment occurs with revision-scope changes. Multiple revisions can run concurrently in an environment.
    • Application Lifecycle Management revolves around these revisions, with a container app having three phases: deployment, update and deactivation.
    • Microservices are independent units of functionality in Cloud-Native architectures. A single container app typically represents a single microservice, and can be composed from one or more containers. Microservices can now be scaled and upgraded independently, giving your application more flexibility and control.
    • Networking architecture consists of a virtual network (VNET) associated with the environment. Unless you provide a custom VNET at environment creation time, a default VNET is automatically created. The VNET configuration determines access (ingress, internal vs. external) and can influence auto-scaling choices (e.g., use HTTP Edge Proxy and scale based on number of HTTP requests).
    • Observability is about monitoring the health of your application and diagnosing it to improve reliability or performance. Azure Container Apps has a number of features - from Log streaming and Container console to integration with Azure Monitor - to provide a holistic view of application status over time.
    • Easy Auth is possible with built-in support for authentication and authorization including support for popular identity providers like Facebook, Google, Twitter and GitHub - alongside the Microsoft Identity Platform.

    Keep these terms in mind as we walk through more tutorials this week, to see how they find application in real examples. Finally, a note on Dapr, the Distributed Application Runtime that abstracts away many of the challenges posed by distributed systems - and lets you focus on your application logic.

    DAPR INTEGRATION MADE EASY

    Dapr uses a sidecar architecture, allowing Azure Container Apps to communicate with Dapr Building Block APIs over either gRPC or HTTP. Your ACA can be built to run with or without Dapr - giving you the flexibility to incrementally adopt specific APIs and unlock related capabilities as the need arises.

    In later articles this week, we'll do a deeper dive into Dapr and build our first Dapr-enabled Azure Container App to get a better understanding of this integration.

    Exercise

    Congratulations! You made it! By now you should have a good idea of what Cloud-Native development means, why Microservices and Containers are important to that vision - and how Azure Container Apps helps simplify the building and deployment of microservices based applications using serverless architectures on Azure.

    Now it's your turn to reinforce learning by doing.

    Resources

    Three key resources to bookmark and explore:



    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings, that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
    • Integrate Functions with CosmosDB and SignalR to bring real-time push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

    Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn which walks you through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days) but this time, with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

    Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - that explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!


    · 7 min read
    Jay Miller

    Welcome to Day 7 of #30DaysOfServerless!

    Over the past couple of days, we've explored Azure Functions from the perspective of specific programming languages. Today we'll continue that trend by looking at Python - exploring the Timer Trigger and CosmosDB binding, and showcasing integration with a FastAPI-implemented web app.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance: Azure Functions On Python
    • Build & Deploy: Wildfire Detection Apps with Timer Trigger + CosmosDB
    • Demo: My Fire Map App: Using FastAPI and Azure Maps to visualize data
    • Next Steps: Explore Azure Samples
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Developer Guidance

    If you're a Python developer new to serverless on Azure, start with the Azure Functions Python Developer Guide. It covers:

    • Quickstarts with Visual Studio Code and Azure CLI
    • Adopting best practices for hosting, reliability and efficiency.
    • Tutorials showcasing Azure automation, image classification and more
    • Samples showcasing Azure Functions features for Python developers

    Now let's dive in and build our first Python-based Azure Functions app.


    Detecting Wildfires Around the World?

    I live in California, which is known for lots of wildfires. I wanted to create a proof of concept for an application that could let me know if a wildfire was detected near my home.

    NASA has a few satellites orbiting the Earth that can detect wildfires. These satellites scan for radiative heat and use that to determine the likelihood of a wildfire. NASA updates this information about every 30 minutes, and it can take about four hours to scan and process it.

    Fire Point Near Austin, TX

    I want to get the information but I don't want to ping NASA or another service every time I check.

    What if I occasionally download all the data I need? Then I can ping that as much as I like.

    I can create a script that does just that. Any time I say "I can create a script", that's a verbal cue for me to consider using an Azure Function. With the function running in the cloud, I can ensure the script runs even when I'm not at my computer.

    How the Timer Trigger Works

    This function will use the Timer Trigger, which means Azure will run it on a scheduled interval. This isn't the only way to keep the data in sync, but we know that ArcGIS, the service we're using, only updates its data every 30 minutes or so.

    To learn more about the TimerTrigger as a concept, check out the Azure Functions documentation around Timers.

    When we create the function, we tell it a few things: where the script lives (in our case, __init__.py), the binding type and direction, and notably how often it should run. We specify the timer using "schedule": "<CRON EXPRESSION>". Here we're using 0 0,30 * * * *, which means the function runs every 30 minutes - at the top of the hour and at the half-hour.

    {
        "scriptFile": "__init__.py",
        "bindings": [
            {
                "name": "reqTimer",
                "type": "timerTrigger",
                "direction": "in",
                "schedule": "0 0,30 * * * *"
            }
        ]
    }

    Next, we create the code that runs when the function is called.

    Connecting to the Database and our Source

    Disclaimer: The data that we're pulling is for educational purposes only. This is not meant to be a production-level application. You're welcome to play with this project, but ensure that you're using the data in compliance with Esri.

    Our function does two important things.

    1. It pulls data from ArcGIS that meets the parameters
    2. It stores that pulled data into our database

    If you want to check out the code in its entirety, check out the GitHub repository.

    Pulling the data from ArcGIS is easy: we use the ArcGIS Python API, load the service layer, and then query that layer for the specific data we need.

    def write_new_file_data(gis_id: str, layer: int = 0) -> FeatureSet:
        """Queries the fire layer and returns the matching FeatureSet."""
        fire_data = g.content.get(gis_id)  # g is the authenticated arcgis GIS connection created elsewhere in the module
        feature = fire_data.layers[layer]  # Loading the Feature Layer from ArcGIS
        q = feature.query(
            where="confidence >= 65 AND hours_old <= 4",  # The filter for the query
            return_distinct_values=True,
            out_fields="confidence, hours_old",  # The data we want to store with our points
            out_sr=4326,  # The spatial reference of the data
        )
        return q

    Then we need to store the data in our database.

    We're using Cosmos DB for this. Cosmos DB is a NoSQL database, which means the data looks a lot like a Python dictionary because it's stored as JSON. This means we don't need to worry about converting the data into a format that can be stored in a relational database.

    The second reason is that Cosmos DB is tied into the Azure ecosystem, so if we want to create more Azure Functions triggered by events around it, we can.

    Our script grabs the information that we pulled from ArcGIS and stores it in our database.

    # COSMOS_CONNECTION_STRING, DATABASE and CONTAINER are configuration values defined elsewhere in the module
    async with CosmosClient.from_connection_string(COSMOS_CONNECTION_STRING) as client:
        database = client.get_database_client(DATABASE)
        container = database.get_container_client(container=CONTAINER)
        for record in data:
            await container.create_item(
                record,
                enable_automatic_id_generation=True,
            )

    In our code, each of these functions lives in its own module, so in the main function we focus solely on what the Azure Function will be doing. The script that gets called is __init__.py; there, we have the entry point call the other functions.

    We created another function called load_and_write that does all the work outlined above. __init__.py will call that.

    async def main(reqTimer: func.TimerRequest) -> None:
        # database, container and GIS_LAYER_ID are module-level values configured elsewhere
        await update_db.load_and_write(gis_id=GIS_LAYER_ID, database=database, container=container)

    Then we deploy the function to Azure. I like to use VS Code's Azure extension, but you can also deploy it in a few other ways.

    Deploying the function via VS Code
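    If you'd rather deploy from the command line instead, Azure Functions Core Tools provides a publish command. A minimal sketch (it assumes the Function App already exists in Azure and that you're signed in with the Azure CLI; replace <APP_NAME> with your app's name):

    func azure functionapp publish <APP_NAME>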

    Once the function is deployed, we can load the Azure portal and see a ping whenever the function is called. Each ping corresponds to the function being run.

    We can also see the data now living in the datastore. Document in Cosmos DB

    It's in the Database, Now What?

    Now the real fun begins. We just loaded the latest fire data into a database. We can now query that data and serve it to others.

    As I mentioned before, our Cosmos DB data is also stored in Azure, which means that we can deploy Azure Functions to trigger when new data is added. Perhaps you can use this to check for fires near you and use a Logic App to send an alert to your phone or email.

    Another option is to create a web application that talks to the database and displays the data. I've created an example of this using FastAPI – https://jm-func-us-fire-notify.azurewebsites.net.

    Website that Checks for Fires


    Next Steps

    This article showcased the Timer Trigger and the HTTP Trigger for Azure Functions in Python. Now try exploring other triggers and bindings by browsing the Bindings code samples for Python and the Azure Functions samples for Python.

    Once you've tried out the samples, you may want to explore more advanced integrations or extensions for serverless Python scenarios. Here are some suggestions:

    And check out the resources for more tutorials to build up your Azure Functions skills.

    Exercise

    I encourage you to fork the repository and try building and deploying it yourself! You can see the TimerTrigger and an HTTPTrigger powering the website.

    Then try extending it. Perhaps if wildfires are a big thing in your area, you can use some of the data available in Planetary Computer to check out some other datasets.

    Resources


    · 10 min read
    Mike James
    Matt Soucoup

    Welcome to Day 6 of #30DaysOfServerless!

    The theme for this week is Azure Functions. Today we're going to talk about why Azure Functions are a great fit for .NET developers.


    What We'll Cover

    • What is serverless computing?
    • How does Azure Functions fit in?
    • Let's build a simple Azure Function in .NET
    • Developer Guide, Samples & Scenarios
    • Exercise: Explore the Create Serverless Applications path.
    • Resources: For self-study!

    A banner image that has the title of this article with the author's photo and a drawing that summarizes the demo application.


    The leaves are changing colors and there's a chill in the air, or for those lucky folks in the Southern Hemisphere, the leaves are budding and a warmth is in the air. Either way, that can only mean one thing - it's Serverless September!🍂 So today, we're going to take a look at Azure Functions - what they are, and why they're a great fit for .NET developers.

    What is serverless computing?

    For developers, serverless computing means you write highly compact individual functions that do one thing - and run in the cloud. These functions are triggered by some external event. That event could be a record being inserted into a database, a file uploaded into BLOB storage, a timer interval elapsed, or even a simple HTTP request.

    But... servers are still definitely involved! What has changed from other types of cloud computing is that the idea and ownership of the server has been abstracted away.

    A lot of the time you'll hear folks refer to this as Functions as a Service or FaaS. The defining characteristic is that all you need to do is put together your application logic. Your code is going to be invoked in response to events - and the cloud provider takes care of everything else. You literally get to focus on only the business logic you need to run in response to something of interest - no worries about hosting.

    You do not need to worry about wiring up the plumbing between the service that originates the event and the serverless runtime environment. The cloud provider will handle the mechanism to call your function in response to whatever event you chose to have the function react to. And it passes along any data that is relevant to the event to your code.

    And here's a really neat thing. You only pay for the time the serverless function is running. So, if you have a function that is triggered by an HTTP request, and you rarely get requests to your function, you would rarely pay.

    How does Azure Functions fit in?

    Microsoft's Azure Functions is a modern serverless architecture, offering event-driven cloud computing that is easy for developers to use. It provides a way to run small pieces of code or Functions in the cloud without developers having to worry themselves about the infrastructure or platform the Function is running on.

    That means we're only concerned about writing the logic of the Function. And we can write that logic in our choice of languages... like C#. We are also able to add packages from NuGet to Azure Functions—this way, we don't have to reinvent the wheel and can use well-tested libraries.

    And the Azure Functions runtime takes care of a ton of neat stuff for us, like passing in information about the event that caused it to kick off - in a strongly typed variable. It also "binds" to other services, like Azure Storage, so we can easily access those services from our code without having to worry about new'ing them up.

    Let's build an Azure Function!

    Scaffold the Function

    Don't worry about having an Azure subscription or even being connected to the internet—we can develop and debug Azure Functions locally using either Visual Studio or Visual Studio Code!

    For this example, I'm going to use Visual Studio Code to build up a Function that responds to an HTTP trigger and then writes a message to an Azure Storage Queue.

    Diagram of the how the Azure Function will use the HTTP trigger and the Azure Storage Queue Binding

    The incoming HTTP call is the trigger and the message queue the Function writes to is an output binding. Let's have at it!

    info

    You do need to have some tools downloaded and installed to get started. First and foremost, you'll need Visual Studio Code. Then you'll need the Azure Functions extension for VS Code to do the development with. Finally, you'll need the Azurite Emulator installed as well—this will allow us to write to a message queue locally.

    Oh! And of course, .NET 6!

    Now with all of the tooling out of the way, let's write a Function!

    1. Fire up Visual Studio Code. Then, from the command palette, type: Azure Functions: Create New Project

      Screenshot of create a new function dialog in VS Code

    2. Follow the steps as to which directory you want to create the project in and which .NET runtime and language you want to use.

      Screenshot of VS Code prompting which directory and language to use

    3. Pick .NET 6 and C#.

      It will then prompt you to pick the folder in which your Function app resides and then select a template.

      Screenshot of VS Code prompting you to pick the Function trigger template

      Pick the HTTP trigger template. When prompted for a name, call it: PostToAQueue.

    Execute the Function Locally

    1. After you give it a namespace, it prompts for an authorization level - pick Anonymous. Now we have a Function! Let's go ahead and hit F5 and see it run!
    info

    After the templates have finished installing, you may get a prompt to download additional components—these are NuGet packages. Go ahead and do that.

    When it runs, you'll see the Azure Functions logo appear in the Terminal window with the URL the Function is located at. Copy that link.

    Screenshot of the Azure Functions local runtime starting up

    2. Type the link into a browser, adding a name parameter as shown in this example: http://localhost:7071/api/PostToAQueue?name=Matt. The Function will respond with a message. You can even set breakpoints in Visual Studio Code and step through the code!

    Write To Azure Storage Queue

    Next, we'll get this HTTP trigger Function to write to a local Azure Storage Queue. First we need to add the Storage NuGet package to our project. In the terminal, type:

    dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage

    Then set a configuration setting to tell the Function runtime where to find the Storage. Open up local.settings.json and set "AzureWebJobsStorage" to "UseDevelopmentStorage=true". The full file will look like:

    {
        "IsEncrypted": false,
        "Values": {
            "AzureWebJobsStorage": "UseDevelopmentStorage=true",
            "AzureWebJobsDashboard": ""
        }
    }

    Then create a new class within your project. This class will hold nothing but properties. Call it whatever you want and add whatever properties you want to it. I called mine TheMessage and added Id and Name properties to it.

    public class TheMessage
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    Finally, change your PostToAQueue Function, so it looks like the following:


    public static class PostToAQueue
    {
        [FunctionName("PostToAQueue")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
            [Queue("demoqueue", Connection = "AzureWebJobsStorage")] IAsyncCollector<TheMessage> messages,
            ILogger log)
        {
            string name = req.Query["name"];

            await messages.AddAsync(new TheMessage { Id = System.Guid.NewGuid().ToString(), Name = name });

            return new OkResult();
        }
    }

    Note the addition of the messages variable. This is telling the Function to use the storage connection we specified before via the Connection property. And it is also specifying which queue to use in that storage account, in this case demoqueue.

    All the code is doing is pulling out the name from the query string, new'ing up a new TheMessage class and adding that to the IAsyncCollector variable.

    That will add the new message to the queue!

    Make sure Azurite is started within VS Code (both the queue and blob emulators). Run the app and send the same GET request as before: http://localhost:7071/api/PostToAQueue?name=Matt.
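    For example, from a terminal (any HTTP client works; curl is shown here just as a quick way to issue that same GET request):

    curl "http://localhost:7071/api/PostToAQueue?name=Matt"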

    If you have the Azure Storage Explorer installed, you can browse your local Queue and see the new message in there!

    Screenshot of Azure Storage Explorer with the new message in the queue

    Summing Up

    We had a quick look at what Microsoft's serverless offering, Azure Functions, is comprised of. It's a full-featured FaaS offering that enables you to write functions in your language of choice, including reusing packages such as those from NuGet.

    A highlight of Azure Functions is the way they are triggered and bound. The triggers define how a Function starts, and bindings are akin to input and output parameters on it that correspond to external services. The best part is that the Azure Function runtime takes care of maintaining the connection to the external services so you don't have to worry about new'ing up or disposing of the connections yourself.

    We then wrote a quick Function that gets triggered off an HTTP request and then writes a query string parameter from that request into a local Azure Storage Queue.

    What's Next

    So, where can you go from here?

    Think about how you can build real-world scenarios by integrating other Azure services. For example, you could use serverless integrations to build a workflow where the input payload, received via an HTTP Trigger, is stored in Blob Storage (output binding), which in turn triggers another service (e.g., Cognitive Services) that processes the blob and returns an enhanced result.

    Keep an eye out for an update to this post where we walk through a scenario like this with code. Check out the resources below to help you get started on your own.

    Exercise

    This brings us close to the end of Week 1 with Azure Functions. We've learned core concepts, built and deployed our first Functions app, and explored quickstarts and scenarios for different programming languages. So, what can you do to explore this topic on your own?

    • Explore the Create Serverless Applications learning path which has several modules that explore Azure Functions integrations with various services.
    • Take up the Cloud Skills Challenge and complete those modules in a fun setting where you compete with peers for a spot on the leaderboard!

    Then come back tomorrow as we wrap up the week with a discussion on end-to-end scenarios, a recap of what we covered this week, and a look at what's ahead next week.

    Resources

    Start here for developer guidance in getting started with Azure Functions as a .NET/C# developer:

    Then learn about supported Triggers and Bindings for C#, with code snippets to show how they are used.

    Finally, explore Azure Functions samples for C# and learn to implement serverless solutions. Examples include:


    · 7 min read
    Aaron Powell

    Welcome to Day 5 of #30DaysOfServerless!

    Yesterday we looked at Azure Functions from the perspective of a Java developer. Today, we'll do a similar walkthrough from the perspective of a JavaScript developer.

    And, we'll use this to explore another popular usage scenario for Azure Functions: building a serverless HTTP API using JavaScript.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance
    • Create Azure Function with CLI
    • Calling an external API
    • Azure Samples & Scenarios for JS
    • Exercise: Support searching
    • Resources: For self-study!


    Developer Guidance

    If you're a JavaScript developer new to serverless on Azure, start by exploring the Azure Functions JavaScript Developers Guide. It covers:

    • Quickstarts for Node.js - using Visual Studio Code, CLI or Azure Portal
    • Guidance on hosting options and performance considerations
    • Azure Functions bindings (and code samples) for JavaScript
    • Scenario examples - integrations with other Azure Services

    Node.js 18 Support (Public Preview)

    Azure Functions support for Node.js 18 entered Public Preview on Aug 31, 2022 and is supported by the Azure Functions v.4.x runtime!

    As we continue to explore how we can use Azure Functions, today we're going to look at using JavaScript to create one, and we're going to be using the newly released Node.js 18 support for Azure Functions to make the most out of the platform.

    Ensure you have Node.js 18 and Azure Functions v4.x versions installed, along with a text editor (I'll use VS Code in this post), and a terminal, then we're ready to go.

    Scenario: Calling The GitHub API

    The application we're going to be building today will use the GitHub API to return a random commit message, so that we don't need to come up with one ourselves! After all, naming things can be really hard! 🤣

    Creating the Azure Function

    To create our Azure Function, we're going to use the Azure Functions CLI, which we can install using npm:

    npm install --global azure-functions-core-tools

    Once that's installed, we can use the new func command to initialise our project:

    func init --worker-runtime node --language javascript

    When running func init we can either provide the worker-runtime and language as arguments, or use the menu system that the tool provides. For brevity's sake, I've used the arguments here, specifying that we want node as the runtime and javascript as the language, but you could change that to typescript if you'd prefer to use TypeScript.

    Once the init command is completed, you should have a .vscode folder, and the files .gitignore, host.json, local.settings.json, and package.json.

    Files generated by func init

    Adding a HTTP Trigger

    We have an empty Functions app so far; what we need to do next is create a Function for it to run. We're going to make an HTTP Trigger Function - a Function that responds to HTTP requests. We'll use the func new command to create it:

    func new --template "HTTP Trigger" --name "get-commit-message"

    When this completes, we'll have a folder for the Function, using the name we provided, that contains the files function.json and index.js. Let's open function.json to understand it a little bit:

    {
        "bindings": [
            {
                "authLevel": "function",
                "type": "httpTrigger",
                "direction": "in",
                "name": "req",
                "methods": [
                    "get",
                    "post"
                ]
            },
            {
                "type": "http",
                "direction": "out",
                "name": "res"
            }
        ]
    }

    This file is used to tell Functions about the Function that we've created and what it does, so it knows to handle the appropriate events. We have a bindings node which contains the event bindings for our Azure Function. The first binding uses the type httpTrigger, which indicates that it'll be executed, or triggered, by an HTTP event, and the methods array indicates that it's listening to both GET and POST (you can change this to whichever HTTP methods you want to support). The HTTP request information will be bound to a property in the Functions context called req, so we can access query strings, the request body, etc.

    The other binding has the direction of out, meaning it's something the Function will return to the caller. Since this is an HTTP API, the type is http, indicating that we'll return an HTTP response, and that response will be on a property called res that we add to the Functions context.

    Let's go ahead and start the Function and call it:

    func start

    Starting the Function

    With the Function started, access the endpoint http://localhost:7071/api/get-commit-message via a browser or using cURL:

    curl http://localhost:7071/api/get-commit-message\?name\=ServerlessSeptember

    Hello from Azure Functions

    🎉 CONGRATULATIONS

    You created and ran a JavaScript function app locally!

    Calling an external API

    It's time to update the Function to do what we want to do - call the GitHub Search API and get some commit messages. The endpoint that we'll be calling is https://api.github.com/search/commits?q=language:javascript.

    Note: The GitHub API is rate limited and this sample will call it unauthenticated, so be aware of that in your own testing.
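    If you'd like to see the shape of that payload before wiring it into the Function, you can call the endpoint directly from a terminal (unauthenticated, so the rate limits mentioned above apply):

    curl "https://api.github.com/search/commits?q=language:javascript"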

    To call this API, we'll leverage the newly released fetch support in Node 18 and async/await, to make for a very clean Function.

    Open up the index.js file, and delete the contents of the existing Function, so we have an empty one:

    module.exports = async function (context, req) {

    }

    The default template uses CommonJS, but you can use ES Modules with Azure Functions if you prefer.
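    As a rough sketch of what the ES Modules flavour can look like (assumptions for illustration: the file is renamed to index.mjs so the Node.js worker treats it as an ES module, and the handler becomes the default export):

    // index.mjs - the same empty handler, authored as an ES module
    // (you may also need to point "scriptFile" in function.json at index.mjs)
    export default async function (context, req) {

    }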

    Now we'll use fetch to call the API, and unpack the JSON response:

    module.exports = async function (context, req) {
        const res = await fetch("https://api.github.com/search/commits?q=language:javascript");
        const json = await res.json();
        const messages = json.items.map(item => item.commit.message);
        context.res = {
            body: {
                messages
            }
        };
    }

    To send a response to the client, we're setting the context.res property, where res is the name of the output binding in our function.json, and giving it a body that contains the commit messages.

    Run func start again, and call the endpoint:

    curl http://localhost:7071/api/get-commit-message

    Then you'll get some commit messages:

    A series of commit messages from the GitHub Search API

    🎉 CONGRATULATIONS

    There we go, we've created an Azure Function which is used as a proxy to another API, that we call (using native fetch in Node.js 18) and from which we return a subset of the JSON payload.

    Next Steps

    Other Triggers, Bindings

    This article focused on using the HTTPTrigger and relevant bindings, to build a serverless API using Azure Functions. How can you explore other supported bindings, with code samples to illustrate usage?

    Scenarios with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other services. Here are some suggestions:

    Exercise: Support searching

    The GitHub Search API allows you to provide search parameters via the q query string. In this sample, we hard-coded it to be language:javascript, but as a follow-on exercise, expand the Function to allow the caller to provide the search terms as a query string to the Azure Function, which is passed to the GitHub Search API. Hint - have a look at the req argument.
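    If you want a starting point, here is one possible shape of that change - a hedged sketch rather than the official solution. It assumes the caller passes the search term in a q query string parameter and falls back to the original hard-coded filter when none is provided:

    module.exports = async function (context, req) {
        // use the caller-supplied search term if present, otherwise keep the original default
        const query = req.query.q || "language:javascript";
        const res = await fetch(`https://api.github.com/search/commits?q=${encodeURIComponent(query)}`);
        const json = await res.json();
        const messages = json.items.map(item => item.commit.message);
        context.res = {
            body: {
                messages
            }
        };
    }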

    Resources


    · 9 min read
    Nitya Narasimhan

    Welcome to Day 3 of #30DaysOfServerless!

    Yesterday we learned core concepts and terminology for Azure Functions, the signature Functions-as-a-Service option on Azure. Today we take our first steps into building and deploying an Azure Functions app, and validate local development setup.

    Ready? Let's go.


    What We'll Cover


    Developer Guidance

    Before we jump into development, let's familiarize ourselves with language-specific guidance from the Azure Functions Developer Guide. We'll review the JavaScript version but guides for F#, Java, Python, C# and PowerShell are also available.

    1. A function is defined by two things: code (written in a supported programming language) and configuration (specified in a function.json file, declaring the triggers, bindings and other context for execution).

    2. A function app is the unit of deployment for your functions, and is associated with a single execution context or runtime. It can contain multiple functions, but they must be in the same language.

    3. A host configuration is runtime-specific configuration that affects all functions running in a given function app instance. It is defined in a host.json file.

    4. A recommended folder structure is defined for the function app, but may vary based on the programming language used. Check the documentation on folder structures to learn the default for your preferred language.

    Here's an example of the JavaScript folder structure for a function app containing two functions with some shared dependencies. Note that host.json (runtime configuration) is defined once, in the root directory. And function.json is defined separately for each function.

    FunctionsProject
    | - MyFirstFunction
    | | - index.js
    | | - function.json
    | - MySecondFunction
    | | - index.js
    | | - function.json
    | - SharedCode
    | | - myFirstHelperFunction.js
    | | - mySecondHelperFunction.js
    | - node_modules
    | - host.json
    | - package.json
    | - local.settings.json

    We'll dive into what the contents of these files look like, when we build and deploy the first function. We'll cover local.settings.json in the About Local Testing section at the end.


    My First Function App

    The documentation provides quickstart options for all supported languages. We'll walk through the JavaScript versions in this article. You have two options for development: Visual Studio Code or the command line (Azure Functions Core Tools with the Azure CLI).

    I'm a huge fan of VS Code - so I'll be working through that tutorial today.

    PRE-REQUISITES

    Don't forget to validate your setup by checking the versions of installed software.

    Install VSCode Extension

    Installing the Visual Studio Code extension should automatically open this page in your IDE with similar quickstart instructions, but potentially more recent screenshots.

    Azure Functions extension for Visual Studio Code

    Note that it may make sense to install the Azure tools for Visual Studio Code extensions pack if you plan on working through the many projects in Serverless September. This includes the Azure Functions extension by default.

    Create First Function App

    Walk through the Create local [project] steps of the quickstart. The process is quick and painless and scaffolds out this folder structure and files. Note the existence (and locations) of the function.json and host.json files.

    Final screenshot for VS Code workflow

    Explore the Code

    Check out the function.json configuration file. It shows that the function is activated by an httpTrigger with an input binding (tied to the req payload) and an output binding (tied to the res payload). And it supports both GET and POST requests on the exposed URL.

    {
        "bindings": [
            {
                "authLevel": "anonymous",
                "type": "httpTrigger",
                "direction": "in",
                "name": "req",
                "methods": [
                    "get",
                    "post"
                ]
            },
            {
                "type": "http",
                "direction": "out",
                "name": "res"
            }
        ]
    }

    Check out index.js - the function implementation. We see it logs a message to the console when invoked. It then extracts a name value from the input payload (req) and crafts a different responseMessage based on the presence/absence of a valid name. It returns this response in the output payload (res).

    module.exports = async function (context, req) {
        context.log('JavaScript HTTP trigger function processed a request.');

        const name = (req.query.name || (req.body && req.body.name));
        const responseMessage = name
            ? "Hello, " + name + ". This HTTP triggered function executed successfully."
            : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";

        context.res = {
            // status: 200, /* Defaults to 200 */
            body: responseMessage
        };
    }

    Preview Function App Locally

    You can now run this function app locally using Azure Functions Core Tools. VS Code integrates seamlessly with this CLI-based tool, making it possible for you to exploit all its capabilities without leaving the IDE. In fact, the workflow will even prompt you to install those tools if they didn't already exist in your local dev environment.

    Now run the function app locally by clicking on the "Run and Debug" icon in the activity bar (highlighted, left) and pressing the "▶️" (Attach to Node Functions) to start execution. On success, your console output should show something like this.

    Final screenshot for VS Code workflow

    You can test the function locally by visiting the Function Url shown (http://localhost:7071/api/HttpTrigger1) or by opening the Workspace region of the Azure extension, and selecting the Execute Function now menu item as shown.

    Final screenshot for VS Code workflow

    In the latter case, the Enter request body popup will show a pre-populated request of {"name":"Azure"} that you can submit.

    Final screenshot for VS Code workflow

    On successful execution, your VS Code window will show a notification as follows. Take note of the console output - it shows the message encoded in index.js.

    Final screenshot for VS Code workflow

    You can also visit the deployed function URL directly in a local browser - testing the case for a request made with no name payload attached. Note how the response in the browser now shows the non-personalized version of the message!

    Final screenshot for VS Code workflow
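    You can also exercise both request paths from the command line with curl (the HttpTrigger1 route matches the scaffolded default shown above, and the function accepts both GET and POST):

    curl "http://localhost:7071/api/HttpTrigger1?name=Azure"
    curl -X POST "http://localhost:7071/api/HttpTrigger1" -H "Content-Type: application/json" -d '{"name":"Azure"}'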

    🎉 Congratulations

    You created and ran a function app locally!

    (Re)Deploy to Azure

    Now, just follow the creating a function app in Azure steps to deploy it to Azure, using an active subscription! The deployed app resource should now show up under the Function App Resources where you can click Execute Function Now to test the Azure-deployed version instead. You can also look up the function URL in the portal and visit that link in your local browser to trigger the function without the name context.

    🎉 Congratulations

    You have an Azure-hosted serverless function app!

    Challenge yourself and try to change the code and redeploy to Azure to return something different. You have effectively created a serverless API endpoint!


    About Core Tools

    That was a lot to cover! In the next few days we'll have more examples for Azure Functions app development - focused on different programming languages. So let's wrap today's post by reviewing two helpful resources.

    First, let's talk about Azure Functions Core Tools - the command-line tool that lets you develop, manage, and deploy Azure Functions projects from your local development environment. It is used transparently by the VS Code extension - but you can use it directly from a terminal for a powerful command-line end-to-end developer experience! The Core Tools commands are organized into contexts that cover project scaffolding, local execution, settings management, and deployment to Azure.
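    Here are a few representative commands as a sketch (run func --help for the complete list supported by your installed version):

    func init MyFunctionsProject                # scaffold a new function app project
    func new                                    # add a function from a template
    func start                                  # run the function app on the local runtime
    func azure functionapp publish <APP_NAME>   # deploy the project to an existing Function App in Azure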

    Learn how to work with Azure Functions Core Tools. Not only can it help with quick command execution, it can also be invaluable for debugging issues that may not always be visible or understandable in an IDE.

    About Local Testing

    You might have noticed that the scaffold also produced a local.settings.json file. What is that and why is it useful? By definition, the local.settings.json file "stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally."

    Read the guidance on Code and test Azure Functions Locally to learn more about how to configure development environments locally, for your preferred programming language, to support testing and debugging on the local Functions runtime.
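    For reference, a freshly scaffolded JavaScript project typically produces a local.settings.json along these lines (a sketch - your values may differ; UseDevelopmentStorage=true points the runtime at a local storage emulator such as Azurite):

    {
        "IsEncrypted": false,
        "Values": {
            "AzureWebJobsStorage": "UseDevelopmentStorage=true",
            "FUNCTIONS_WORKER_RUNTIME": "node"
        }
    }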

    Exercise

    We made it! Now it's your turn!! Here are a few things you can try to apply what you learned and reinforce your understanding:

    Resources

    Bookmark and visit the #30DaysOfServerless Collection. It's the one-stop collection of resources we will keep updated with links to relevant documentation and learning resources.


    · 14 min read
    Justin Yoo

    Welcome to Day 28 of #30DaysOfServerless!

    Since it's the serverless end-to-end week, I'm going to discuss how a serverless Azure Functions application with the OpenAPI extension can be seamlessly integrated with a Power Platform custom connector through Azure API Management - in a post I call "Where am I? My GPS Location with Serverless Power Platform Custom Connector".

    OK. Are you ready? Let's get started!


    What We'll Cover

    • What is Power Platform custom connector?
    • Proxy app to Google Maps and Naver Map API
    • API Management integration
    • Two ways of building custom connector
    • Where am I? Power Apps app
    • Exercise: Try this yourself!
    • Resources: For self-study!


    SAMPLE REPO

    Want to follow along? Check out the sample app on GitHub repository used in this post.

    What is Power Platform custom connector?

    Power Platform is a low-code/no-code application development tool for fusion teams - groups of people from various disciplines, including field experts (domain experts), IT professionals and professional developers, working together to deliver business value. Within the fusion team, the domain experts become citizen developers or low-code developers through Power Platform. What makes Power Platform even more powerful is that it offers hundreds of connectors to other Microsoft 365 and third-party services like SAP, ServiceNow, Salesforce, Google, etc.

    However, what if you want to use your internal APIs, or APIs that don't yet offer an official connector? Here's an example: suppose your company has an inventory management system and you want to use it within Power Apps or Power Automate. That is exactly where Power Platform custom connectors become necessary.

    Inventory Management System for Power Apps

    Therefore, Power Platform custom connectors enrich citizen developers' capabilities, because those connectors can expose any API application for the citizen developers to use.

    In this post, let's build a custom connector that provides a static map image generated by Google Maps API and Naver Map API using your GPS location.

    Proxy app to Google Maps and Naver Map API

    First, let's build an Azure Functions app that connects to Google Maps and Naver Map. Suppose that you've already got the API keys for both services. If you haven't yet, get the keys first by visiting here for Google and here for Naver. Then, store them in local.settings.json within your Azure Functions app.

    {
        "Values": {
            ...
            "Maps__Google__ApiKey": "<GOOGLE_MAPS_API_KEY>",
            "Maps__Naver__ClientId": "<NAVER_MAP_API_CLIENT_ID>",
            "Maps__Naver__ClientSecret": "<NAVER_MAP_API_CLIENT_SECRET>"
        }
    }

    Here's the sample logic to get the static image from Google Maps API. It takes the latitude and longitude of your current location and image zoom level, then returns the static map image. There are a few hard-coded assumptions, though:

    • The image size should be 400x400.
    • The image should be in .png format.
    • The marker should be red and show my location.
    public class GoogleMapService : IMapService
    {
        public async Task<byte[]> GetMapAsync(HttpRequest req)
        {
            var latitude = req.Query["lat"];
            var longitude = req.Query["long"];
            var zoom = (string)req.Query["zoom"] ?? "14";

            var sb = new StringBuilder();
            sb.Append("https://maps.googleapis.com/maps/api/staticmap")
              .Append($"?center={latitude},{longitude}")
              .Append("&size=400x400")
              .Append($"&zoom={zoom}")
              .Append($"&markers=color:red|{latitude},{longitude}")
              .Append("&format=png32")
              .Append($"&key={this._settings.Google.ApiKey}");
            var requestUri = new Uri(sb.ToString());

            var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

            return bytes;
        }
    }

    The NaverMapService class has a similar logic with the same input and assumptions. Here's the code:

    public class NaverMapService : IMapService
    {
        public async Task<byte[]> GetMapAsync(HttpRequest req)
        {
            var latitude = req.Query["lat"];
            var longitude = req.Query["long"];
            var zoom = (string)req.Query["zoom"] ?? "13";

            var sb = new StringBuilder();
            sb.Append("https://naveropenapi.apigw.ntruss.com/map-static/v2/raster")
              .Append($"?center={longitude},{latitude}")
              .Append("&w=400")
              .Append("&h=400")
              .Append($"&level={zoom}")
              .Append($"&markers=color:blue|pos:{longitude}%20{latitude}")
              .Append("&format=png")
              .Append("&lang=en");
            var requestUri = new Uri(sb.ToString());

            this._http.DefaultRequestHeaders.Clear();
            this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY-ID", this._settings.Naver.ClientId);
            this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY", this._settings.Naver.ClientSecret);

            var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

            return bytes;
        }
    }

    Let's take a look at the function endpoints. Here are the ones for Google Maps and Naver Map. As the GetMapAsync(req) method returns a byte array, you need to transform it into a FileContentResult with the content type of image/png.

    // Google Maps
    public class GoogleMapsTrigger
    {
        [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
        public async Task<IActionResult> GetGoogleMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
        {
            this._logger.LogInformation("C# HTTP trigger function processed a request.");

            var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

            return new FileContentResult(bytes, "image/png");
        }
    }

    // Naver Map
    public class NaverMapsTrigger
    {
        [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
        public async Task<IActionResult> GetNaverMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
        {
            this._logger.LogInformation("C# HTTP trigger function processed a request.");

            var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

            return new FileContentResult(bytes, "image/png");
        }
    }

    Then, add the OpenAPI capability to each function endpoint. Here's the example:

    // Google Maps
    public class GoogleMapsTrigger
    {
        [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
        // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
        [OpenApiOperation(operationId: nameof(GoogleMapsTrigger.GetGoogleMapImage), tags: new[] { "google" })]
        [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
        [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
        [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `14`")]
        [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
        // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
        public async Task<IActionResult> GetGoogleMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
        {
            ...
        }
    }

    // Naver Map
    public class NaverMapsTrigger
    {
        [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
        // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
        [OpenApiOperation(operationId: nameof(NaverMapsTrigger.GetNaverMapImage), tags: new[] { "naver" })]
        [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
        [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
        [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `13`")]
        [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
        // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
        public async Task<IActionResult> GetNaverMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
        {
            ...
        }
    }

    Run the function app locally. Here are the latitude and longitude values for Seoul, Korea - you can pass them to the google/image endpoint as shown in the example after this list.

    • latitude: 37.574703
    • longitude: 126.978519
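    For example, with the app running locally via func start, you can request a static map for those coordinates directly (a sketch using the google/image route defined above; the zoom parameter is optional):

    curl "http://localhost:7071/api/google/image?lat=37.574703&long=126.978519&zoom=14" --output seoul-map.png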

    Google Map for Seoul

    It seems to be working! Let's deploy it to Azure.

    API Management integration

    Visual Studio 2022 provides a built-in tool for deploying Azure Functions apps to Azure. In addition, the deployment tool supports seamless integration with Azure API Management, as long as your Azure Functions app enables the OpenAPI capability. In this post, I'm going to use this feature. Right-click on the Azure Functions project and select the "Publish" menu.

    Visual Studio context menu for publish

    Then, you will see the publish screen. Click the "➕ New" button to create a new publish profile.

    Create a new publish profile

    Choose "Azure" and click the "Next" button.

    Choose the target platform for publish

    Select the app instance. This time, simply pick the "Azure Function App (Windows)" option, then click "Next".

    Choose the target OS for publish

    If you've already provisioned an Azure Functions app instance, you will see it on the screen. Otherwise, create a new one. Then, click "Next".

    Choose the target instance for publish

    In the next step, you are asked to choose the Azure API Management instance for integration. Choose one, or create a new one. Then, click "Next".

    Choose the APIM instance for integration

    Finally, select the publish method - either local publish or a GitHub Actions workflow. Let's pick the local publish method for now. Then, click "Finish".

    Choose the deployment type

    The publish profile has been created. Click "Close" to move on.

    Publish profile created

    Now the function app is ready for deployment. Click the "Publish" button and see how it goes.

    Publish function app

    The Azure function app has been deployed and integrated with the Azure API Management instance.

    Function app published

    Go to the published function app site, and everything looks OK.

    Function app on Azure

    And API Management shows the function app integrated perfectly.

    Function app integrated with APIM

    Now, you are ready to create a custom connector. Let's move on.

    Two ways of building custom connector

    There are two ways to create a custom connector.

    Export custom connector from API Management

    First, you can use the built-in API Management feature directly: click the ellipsis icon and select the "Create Power Connector" menu.

    Create Power Connector menu

    Then, you are redirected to this screen. While the "API" and "API display name" fields are pre-populated, you need to choose the Power Platform environment tied to your tenant. Choose an environment, click "Authenticate", and click "Create".

    Create custom connector screen

    Check your custom connector on Power Apps or Power Automate side.

    Custom connector created on Power Apps

    However, there's a caveat to this approach. Because it's tied to your tenant, you should use the second approach if you want to use this custom connector in another tenant.

    Import custom connector from OpenAPI document or URL

    Click the ellipsis icon again and select the "Export" menu.

    Export menu

    On the Export API screen, choose the "OpenAPI v2 (JSON)" panel because Power Platform custom connector currently accepts version 2 of the OpenAPI document.

    Select OpenAPI v2

    Download the OpenAPI document to your local computer and move to your Power Apps or Power Automate page under your desired environment. I'm going to use the Power Automate page. First, go to the "Data" ➡️ "Custom connectors" page. Then, click the "➕ New custom connector" ➡️ "Import an OpenAPI file" at the top right corner.

    New custom connector

    When a modal pops up, give the custom connector a name and import the OpenAPI document exported above. Then, click "Continue".

    Import custom connector

    Actually, that's it! Next, click the "✔️ Create connector" button to create the connector.

    Create custom connector

    Go back to the custom connector page, and you will see the "Maps API" custom connector you just created.

    Custom connector imported

    So, you are ready to create a Power Apps app to display your location on Google Maps or Naver Map! Let's move on.

    Where am I? Power Apps app

    Open the Power Apps Studio, and create an empty canvas app named "Where am I", with a phone layout.

    Custom connector integration

    To use the custom connector created above, you need to add it to the Power App. Click the cylinder icon on the left and click the "Add data" button.

    Add custom connector to data pane

    Search the custom connector name, "Maps API", and click the custom connector to add.

    Search custom connector

    To use the custom connector, you also need to create a connection to it. Click the "Connect" button and move on.

    Create connection to custom connector

    Now, you've got the connection to the custom connector.

    Connection to custom connector ready

    Controls

    Let's build the Power Apps app. First of all, put three controls (Image, Slider and Button) onto the canvas.

    Power Apps control added

    Click the "Screen1" control and change the value on the property "OnVisible" to the formula below. The formula stores the current slider value in the zoomlevel collection.

    ClearCollect(
        zoomlevel,
        Slider1.Value
    )

    Click the "Botton1" control and change the value on the property "OnSelected" to the formula below. It passes the current latitude, longitude and zoom level to the custom connector and receives the image data. The received image data is stored in the result collection.

    ClearCollect(
        result,
        MAPS.GetGoogleMapImage(
            Location.Latitude,
            Location.Longitude,
            { zoom: First(zoomlevel).Value }
        )
    )

    Click the "Image1" control and change the value on the property "Image" to the formula below. It gets the image data from the result collection.

    First(result).Url

    Click the "Slider1" control and change the value on the property "OnChange" to the formula below. It stores the current slider value to the zoomlevel collection, followed by calling the custom connector to get the image data against the current location.

    ClearCollect(
        zoomlevel,
        Slider1.Value
    );
    ClearCollect(
        result,
        MAPS.GetGoogleMapImage(
            Location.Latitude,
            Location.Longitude,
            { zoom: First(zoomlevel).Value }
        )
    )

    That seems to be OK. Let's click the "Where am I?" button. But it doesn't show the image. The First(result).Url value is actually similar to this:

    appres://blobmanager/1090a86393a843adbfcf428f0b90e91b/1

    It's an image reference value stored somewhere you can't access directly.

    Workaround Power Automate workflow

    Therefore, you need a workaround using a Power Automate workflow to sort out this issue. Open the Power Automate Studio, create an instant cloud flow with the Power Apps trigger, and give it the name "Where am I". Then add input parameters for lat, long and zoom.

    Power Apps trigger on Power Automate workflow

    Add custom connector action to get the map image.

    Select action to get the Google Maps image

    In the action, pass the appropriate parameters to the action.

    Pass parameters to the custom connector action

    Add a "Response" action and put the following values into each field.

    • "Body" field:

      {
      "base64Image": <power_automate_expression>
      }

      The <power_automate_expression> should be concat('data:', body('GetGoogleMapImage')?['$content-type'], ';base64,', body('GetGoogleMapImage')?['$content']).

    • "Response Body JSON Schema" field:

      {
          "type": "object",
          "properties": {
              "base64Image": {
                  "type": "string"
              }
          }
      }

    Format the Response action

    Let's return to the Power Apps Studio and add the Power Automate workflow you created.

    Add Power Automate workflow

    Select "Button1" and change the value on the property "OnSelect" below. It replaces the direct call to the custom connector with the Power Automate workflow.

    ClearCollect(
        result,
        WhereamI.Run(
            Location.Latitude,
            Location.Longitude,
            First(zoomlevel).Value
        )
    )

    Also, change the value on the property "OnChange" of the "Slider1" control below, replacing the custom connector call with the Power Automate workflow call.

    ClearCollect(
        zoomlevel,
        Slider1.Value
    );
    ClearCollect(
        result,
        WhereamI.Run(
            Location.Latitude,
            Location.Longitude,
            First(zoomlevel).Value
        )
    )

    And finally, change the "Image1" control's "Image" property value below.

    First(result).base64Image

    The workaround has been applied. Click the "Where am I?" button to see your current location from Google Maps.

    Run Power Apps app #1

    If you change the slider left or right, you will see either the zoomed-in image or the zoomed-out image.

    Run Power Apps app #2

    Now, you've created a Power Apps app to show your current location using:

    • the Google Maps API through the custom connector, and
    • a custom connector built with Azure Functions and the OpenAPI extension!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how the custom connector works. After forking the repository, make sure that you create all the necessary secrets to your repository documented in the README file.

    Then, click the "Deploy to Azure" button, and it will provision all necessary Azure resources and deploy an Azure Functions app for a custom connector.

    Deploy To Azure

    Once everything is deployed successfully, try to create a Power Apps app and Power Automate workflow to see your current location in real-time!

    Resources: For self-study!

    Want to know more about Power Platform custom connector and Azure Functions OpenAPI extension? Here are several resources you can take a look at:


    · 9 min read
    Nitya Narasimhan

    Welcome to Day 2️⃣ of #30DaysOfServerless!

    Today, we kickstart our journey into serverless on Azure with a look at Functions-as-a-Service. We'll explore Azure Functions - from core concepts to usage patterns.

    Ready? Let's Go!


    What We'll Cover

    • What is Functions-as-a-Service? (FaaS)
    • What is Azure Functions?
    • Triggers, Bindings and Custom Handlers
    • What is Durable Functions?
    • Orchestrators, Entity Functions and Application Patterns
    • Exercise: Take the Cloud Skills Challenge!
    • Resources: #30DaysOfServerless Collection.


    1. What is FaaS?

    FaaS stands for Functions-as-a-Service. But what does that mean for us as application developers? We know that building and deploying modern applications at scale can get complicated, and it starts with us needing to make decisions about compute. In other words, we need to answer this question: "where should I host my application, given my resource dependencies and scaling requirements?"


    Azure has this useful flowchart (shown below) to guide your decision-making. You'll see that hosting options generally fall into three categories:

    • Infrastructure as a Service (IaaS) - where you provision and manage Virtual Machines yourself (cloud provider manages infra).
    • Platform as a Service (PaaS) - where you use a provider-managed hosting environment like Azure Container Apps.
    • Functions as a Service (FaaS) - where you forget about hosting environments and simply deploy your code for the provider to run.

    Here, "serverless" compute refers to hosting options where we (as developers) can focus on building apps without having to manage the infrastructure. See serverless compute options on Azure for more information.


    2. Azure Functions

    Azure Functions is the Functions-as-a-Service (FaaS) option on Azure. It is the ideal serverless solution if your application is event-driven with short-lived workloads. With Azure Functions, we develop applications as modular blocks of code (functions) that are executed on demand, in response to configured events (triggers). This approach brings us two advantages:

    • It saves us money. We only pay for the time the function runs.
    • It scales with demand. We have 3 hosting plans for flexible scaling behaviors.

    Azure Functions can be programmed in many popular languages (C#, F#, Java, JavaScript, TypeScript, PowerShell or Python), with Azure providing language-specific handlers and default runtimes to execute them.
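
    To make that concrete, here is a minimal sketch of a JavaScript HTTP-triggered function (the function name and response text are purely illustrative, not taken from a specific sample). It would be paired with a function.json that declares an httpTrigger input and an http output binding named res:

    // index.js - a minimal HTTP-triggered function (Node.js v3 programming model).
    module.exports = async function (context, req) {
      const name = (req.query.name || (req.body && req.body.name)) || "world";

      // Whatever we assign to the binding named "res" becomes the HTTP response.
      context.res = {
        status: 200,
        body: `Hello, ${name}! This function ran on demand, in response to an HTTP request.`
      };
    };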

    Concept: Custom Handlers
    • What if we wanted to program in a non-supported language?
    • Or we wanted to use a different runtime for a supported language?

    Custom Handlers have you covered! These are lightweight web servers that receive and process input events from the Functions host, and return responses that can be delivered to any output targets. By this definition, custom handlers can be implemented in any language that supports receiving HTTP events. Check out the quickstart for writing a custom handler in Rust or Go.

    Custom Handlers
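
    To illustrate that contract with a rough sketch (shown in Node.js purely for familiarity - in practice you would reach for a custom handler from a language like Go or Rust), a custom handler is just a small web server: the Functions host forwards each invocation to it over HTTP and expects a JSON payload describing the outputs back. Treat the payload shape below as an approximation of the convention rather than a definitive reference.

    const http = require("http");

    // The Functions host tells the handler which port to listen on.
    const port = process.env.FUNCTIONS_CUSTOMHANDLER_PORT || 3000;

    http.createServer((req, res) => {
      // The host POSTs to /<FunctionName> with the trigger payload in the body.
      let body = "";
      req.on("data", (chunk) => (body += chunk));
      req.on("end", () => {
        res.writeHead(200, { "Content-Type": "application/json" });
        // Respond with output bindings (here, an HTTP response) and optional logs.
        res.end(JSON.stringify({
          Outputs: { res: { statusCode: 200, body: "Hello from a custom handler" } },
          Logs: [`Handled ${req.url}`]
        }));
      });
    }).listen(port);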

    Concept: Trigger and Bindings

    We talked about what functions are (code blocks). But when are they invoked or executed? And how do we provide inputs (arguments) and retrieve outputs (results) from this execution?

    This is where triggers and bindings come in.

    • Triggers define how a function is invoked and what associated data it will provide. A function must have exactly one trigger.
    • Bindings declaratively define how a resource is connected to the function. The binding can be of type input, output, or both. Bindings are optional, and a function can have multiple input and output bindings.

    Azure Functions comes with a number of supported bindings that can be used to integrate relevant services to power a specific scenario. For instance:

    • HTTP Trigger - invokes the function in response to an HTTP request. Use this to implement serverless APIs for your application.
    • Event Grid Trigger - invokes the function on receiving events from an Event Grid. Use this to process events reactively, and potentially publish responses back to custom Event Grid topics.
    • SignalR Service Trigger - invokes the function in response to messages from Azure SignalR, allowing your application to take actions with real-time contexts.

    Triggers and bindings help you abstract your function's interfaces to other components it interacts with, eliminating hardcoded integrations. They are configured differently based on the programming language you use. For example, JavaScript functions are configured in the function.json file. Here's an example of what that looks like.

    {
      "disabled": false,
      "bindings": [
        // ... bindings here
        {
          "type": "bindingType",
          "direction": "in",
          "name": "myParamName",
          // ... more depending on binding
        }
      ]
    }

    The key thing to remember is that triggers and bindings have a direction property - triggers are always in, input bindings are in and output bindings are out. Some bindings can support a special inout direction.

    The documentation has code examples for bindings to popular Azure services. Here's an example of the bindings and trigger configuration for a BlobStorage use case.

    // function.json configuration

    {
      "bindings": [
        {
          "queueName": "myqueue-items",
          "connection": "MyStorageConnectionAppSetting",
          "name": "myQueueItem",
          "type": "queueTrigger",
          "direction": "in"
        },
        {
          "name": "myInputBlob",
          "type": "blob",
          "path": "samples-workitems/{queueTrigger}",
          "connection": "MyStorageConnectionAppSetting",
          "direction": "in"
        },
        {
          "name": "myOutputBlob",
          "type": "blob",
          "path": "samples-workitems/{queueTrigger}-Copy",
          "connection": "MyStorageConnectionAppSetting",
          "direction": "out"
        }
      ],
      "disabled": false
    }

    The code below shows the function implementation. In this scenario, the function is triggered by a queue message carrying an input payload with a blob name. In response, it copies that data to the resource associated with the output binding.

    // function implementation

    module.exports = async function (context) {
      context.log('Node.js Queue trigger function processed', context.bindings.myQueueItem);
      // Copy the blob named by the queue message (input binding) to the output binding.
      context.bindings.myOutputBlob = context.bindings.myInputBlob;
    };
    Concept: Custom Bindings

    What if we have a more complex scenario that requires bindings for non-supported resources?

    There is an option to create custom bindings if necessary. We don't have time to dive into the details here, but definitely check out the documentation.


    3. Durable Functions

    This sounds great, right? But now, let's talk about one challenge for Azure Functions. In the use cases so far, the functions are stateless - they take inputs at runtime if necessary and return output results if required, but they are otherwise self-contained, which is great for scalability!

    But what if I needed to build more complex workflows that need to store and transfer state, and complete operations in a reliable manner? Durable Functions are an extension of Azure Functions that makes stateful workflows possible.

    Concept: Orchestrator Functions

    How can I create workflows that coordinate functions?

    Durable Functions use orchestrator functions to coordinate execution of other Durable functions within a given Functions app. These functions are durable and reliable. Later in this post, we'll talk briefly about some application patterns that showcase popular orchestration scenarios.

    Concept: Entity Functions

    How do I persist and manage state across workflows?

    Entity Functions provide explicit state management for Durable Functions, defining operations to read and write state on durable entities. They are associated with a special entity trigger for invocation. These are currently available only for a subset of programming languages, so check to see if they are supported for your programming language of choice.
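
    For languages where they are supported, an entity function is essentially a small switch over named operations against the entity's state. Here is a minimal counter sketch using the JavaScript durable-functions package (the operation names are just examples, and the function would be paired with an entityTrigger binding in its function.json):

    const df = require("durable-functions");

    module.exports = df.entity(function (context) {
      // Read the entity's current state, defaulting to 0 on first use.
      const currentValue = context.df.getState(() => 0);

      switch (context.df.operationName) {
        case "add":
          context.df.setState(currentValue + context.df.getInput());
          break;
        case "reset":
          context.df.setState(0);
          break;
        case "get":
          context.df.return(currentValue);
          break;
      }
    });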

    USAGE: Application Patterns

    Durable Functions are a fascinating topic that would require a separate, longer post to do justice. For now, let's look at some application patterns that showcase their value, starting with the simplest one - Function Chaining, as shown below:

    Function Chaining

    Here, we want to execute a sequence of named functions in a specific order. As shown in the snippet below, the orchestrator function coordinates invocations on the given functions in the desired sequence - "chaining" inputs and outputs to establish the workflow. Take note of the yield keyword. This triggers a checkpoint, preserving the current state of the function for reliable operation.

    const df = require("durable-functions");

    module.exports = df.orchestrator(function* (context) {
      try {
        const x = yield context.df.callActivity("F1");
        const y = yield context.df.callActivity("F2", x);
        const z = yield context.df.callActivity("F3", y);
        return yield context.df.callActivity("F4", z);
      } catch (error) {
        // Error handling or compensation goes here.
      }
    });

    Other application patterns for durable functions include:

    • Fan-out/fan-in - run many activity functions in parallel, then aggregate the results.
    • Async HTTP APIs - coordinate long-running operations behind an HTTP status-polling endpoint.
    • Monitoring - implement flexible, recurring polling workflows.
    • Human interaction - pause a workflow while waiting for approval, with timeouts and escalation.
    • Aggregator (stateful entities) - accumulate event data over time into a single addressable entity.
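
    As one example, here is a minimal sketch of the fan-out/fan-in pattern, assuming two hypothetical activity functions named GetWorkItems and ProcessItem: the orchestrator starts all the activities in parallel, then waits for every result before aggregating them.

    const df = require("durable-functions");

    module.exports = df.orchestrator(function* (context) {
      // Fan out: schedule one activity per work item without yielding yet.
      const workItems = yield context.df.callActivity("GetWorkItems");
      const tasks = workItems.map((item) => context.df.callActivity("ProcessItem", item));

      // Fan in: wait for all parallel activities, then aggregate their results.
      const results = yield context.df.Task.all(tasks);
      return results.reduce((sum, value) => sum + value, 0);
    });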

    There's a lot more to explore but we won't have time to do that today. Definitely check the documentation and take a minute to read the comparison with Azure Logic Apps to understand what each technology provides for serverless workflow automation.


    4. Exercise

    That was a lot of information to absorb! Thankfully, there are a lot of examples in the documentation that can help put these in context. Here are a couple of exercises you can do, to reinforce your understanding of these concepts.


    5. What's Next?

    The goal for today was to give you a quick tour of key terminology and concepts related to Azure Functions. Tomorrow, we dive into the developer experience, starting with core tools for local development and ending by deploying our first Functions app.

    Want to do some prep work? Here are a few useful links:


    6. Resources



    · 7 min read
    Brian Benz

    Welcome to Day 25 of #30DaysOfServerless!

    Azure Container Apps enable application code packaged in containers to run and scale without the overhead of managing cloud infrastructure and container orchestration. In this post I'll show you how to deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps.


    What We'll Cover

    • Introduction to Deploying Java containers in the cloud
    • Step-by-step: Deploying to Azure Container Registry
    • Step-by-step: Deploying and running on Azure Container Apps
    • Resources: For self-study!


    Deploy Java containers to cloud

    We'll deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps. Here are the main steps:

    • Create Azure Container Registry (ACR) on Azure portal
    • Create Azure Container App (ACA) on Azure portal.
    • Deploy code to Azure Container Registry from the Azure CLI.
    • Deploy container from ACR to ACA using the Azure portal.
    PRE-REQUISITES

    Sign in to Azure from the CLI using the az login command, and follow the prompts in your browser to complete the authentication process. Also, ensure you're running the latest version of the CLI by using the az upgrade command.

    1. Get Sample Code

    Fork and clone the sample GitHub repo to your local machine. Navigate to the repository on GitHub and click Fork in the top-right corner of the page.

    The example code that we're using is a very basic containerized Spring Boot example. There is a lot more to learn about Spring Boot apps in Docker; for a deep dive, check out this Spring Boot Guide.

    2. Run Sample Locally (Optional)

    If you have docker installed locally, you can optionally test the code on your local machine. Navigate to the root directory of the forked repository and run the following commands:

    docker build -t spring-boot-docker-aca .
    docker run -p 8080:8080 spring-boot-docker-aca

    Open a browser and go to http://localhost:8080. You should see this message:

    Hello Docker World

    That indicates that the Spring Boot app is successfully running locally in a docker container.

    Next, let's set up an Azure Container Registry and an Azure Container App, and deploy this container to the cloud!


    3. Step-by-step: Deploy to ACR

    To create a container registry from the portal dashboard, Select Create a resource > Containers > Container Registry.

    Navigate to container registry in portal

    In the Basics tab, enter values for Resource group and Registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. Create a new resource group in the West US location named spring-boot-docker-aca. Select the 'Basic' SKU.

    Keep the default values for the remaining settings. Then select Review + create, then Create. When the Deployment succeeded message appears, select the container registry in the portal.

    Note the registry server name ending with azurecr.io. You will use this in the following steps when you push and pull images with Docker.

    3.1 Log into registry using the Azure CLI

    Before pushing and pulling container images, you must log in to the registry instance. Sign into the Azure CLI on your local machine, then run the az acr login command. For this step, use the registry name, not the server name ending with azurecr.io.

    From the command line, type:

    az acr login --name myregistryname

    The command returns Login Succeeded once completed.

    3.2 Build & deploy with az acr build

    Next, we're going to deploy the docker container we created earlier using the az acr build command. az acr build runs a docker build from local code and pushes the resulting container image to Azure Container Registry if the build is successful.

    Go to your local clone of the spring-boot-docker-aca repo and, from the command line, type:

    az acr build --registry myregistryname --image spring-boot-docker-aca:v1 .

    3.3 List container images

    Once the az acr build command is complete, you should be able to view the container as a repository in the registry. In the portal, open your registry and select Repositories, then select the spring-boot-docker-aca repository created by the build. You should also see the v1 image under Tags.

    4. Deploy on ACA

    Now that we have an image in the Azure Container Registry, we can deploy it to Azure Container Apps. For the first deployment, we'll pull the container from our ACR as part of the ACA setup.

    4.1 Create a container app

    We'll create the container app at the same place that we created the container registry in the Azure portal. From the portal, select Create a resource > Containers > Container App. In the Basics tab, set these values:

    4.2 Enter project details

    • Subscription: Your Azure subscription.
    • Resource group: Use the spring-boot-docker-aca resource group.
    • Container app name: Enter spring-boot-docker-aca.

    4.3 Create an environment

    1. In the Create Container App environment field, select Create new.

    2. In the Create Container App Environment page on the Basics tab, enter the following values:

      • Environment name: Enter my-environment.
      • Region: Select westus3.
    3. Select OK.

    4. Select the Create button at the bottom of the Create Container App Environment page.

    5. Select the Next: App settings button at the bottom of the page.

    5. App settings tab

    The App settings tab is where you connect to the ACR and pull the repository image:

    • Use quickstart image: Uncheck the checkbox.
    • Name: Enter spring-boot-docker-aca.
    • Image source: Select Azure Container Registry.
    • Registry: Select your ACR from the list.
    • Image: Select spring-boot-docker-aca from the list.
    • Image Tag: Select v1 from the list.

    5.1 Application ingress settings

    • Ingress: Select Enabled.
    • Ingress visibility: Select External to publicly expose your container app.
    • Target port: Enter 8080.

    5.2 Deploy the container app

    1. Select the Review and create button at the bottom of the page.
    2. Select Create.

    Once the deployment is successfully completed, you'll see the message: Your deployment is complete.

    5.3 Verify deployment

    In the portal, go to the Overview of your spring-boot-docker-aca Azure Container App, and click on the Application Url. You should see this message in the browser:

    Hello Docker World

    That indicates that the Spring Boot app is running in a docker container in your spring-boot-docker-aca Azure Container App.

    Resources: For self-study!

    Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!


    · 19 min read
    Alex Wolf

    Welcome to Day 24 of #30DaysOfServerless!

    We continue exploring E2E scenarios with this tutorial where you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps.

    The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.


    What We'll Cover

    • Deploy ASP.NET Core 6.0 app to Azure Container Apps
    • Automate deployment workflows using GitHub Actions
    • Provision and deploy resources using Azure Bicep
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.

    In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps. The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.

    You will use GitHub Actions in combination with Bicep to deploy the application. These tools provide an approachable and sustainable solution for building CI/CD pipelines and working with Container Apps.

    PRE-REQUISITES

    Architecture

    In this tutorial, we'll setup a container app environment with a separate container for each project in the sample store app. The major components of the sample project include:

    • A Blazor Server front-end web app to display product information
    • A products API to list available products
    • An inventory API to determine how many products are in stock
    • GitHub Actions and Bicep templates to provision Azure resources and then build and deploy the sample app.

    You will explore these templates later in the tutorial.

    Public internet traffic should be proxied to the Blazor app. The back-end APIs should only be reachable via requests from the Blazor app inside the container apps environment. This setup can be achieved using container apps environment ingress configurations during deployment.

    An architecture diagram of the shopping app


    Project Sources

    Want to follow along? Fork the sample below. The tutorial can be completed with or without Dapr integration. Pick the path you feel comfortable with. Dapr provides various benefits that make working with microservices easier - you can learn more in the docs. For this tutorial, you will need a GitHub account and the Azure CLI.

    PICK YOUR PATH

    To follow along with this tutorial, fork the relevant sample project below.

    You can run the app locally from Visual Studio:

    • Right click on the Blazor Store project and select Set as Startup Project.
    • Press the start button at the top of Visual Studio to run the app.
    • (Once running) start each API in the background by
    • right-clicking on the project node
    • selecting Debug --> Start without debugging.

    Once the Blazor app is running, you should see something like this:

    An architecture diagram of the shopping app


    Configuring Azure credentials

    In order to deploy the application to Azure through GitHub Actions, you first need to create a service principal. The service principal will allow the GitHub Actions process to authenticate to your Azure subscription to create resources and deploy code. You can learn more about Service Principals in the Azure CLI documentation. For this step you'll need to be logged into the Azure CLI.

    1) If you have not done so already, make sure to fork the sample project to your own GitHub account or organization.

    2) Once you have completed this step, create a service principal using the Azure CLI command below:

    ```azurecli
    $subscriptionId=$(az account show --query id --output tsv)
    az ad sp create-for-rbac --sdk-auth --name WebAndApiSample --role Contributor --scopes /subscriptions/$subscriptionId
    ```

    3) Copy the JSON output of the CLI command to your clipboard.

    4) Under the Settings tab of your forked GitHub repo, create a new secret named AzureSPN. The name is important to match the Bicep templates included in the project, which we'll review later. Paste the service principal values you copied into the secret and save your changes. This new secret will be used by the GitHub Actions workflow to authenticate to Azure.

    A screenshot of adding GitHub secrets.

    Deploy using GitHub Actions

    You are now ready to deploy the application to Azure Container Apps using GitHub Actions. The sample application includes a GitHub Actions template that is configured to build and deploy any changes to a branch named deploy. The deploy branch does not exist in your forked repository by default, but you can easily create it through the GitHub user interface.

    1) Switch to the Actions tab along the top navigation of your GitHub repository. If you have not done so already, ensure that workflows are enabled by clicking the button in the center of the page.

    A screenshot showing how to enable GitHub actions

    2) Navigate to the main Code tab of your repository and select the main dropdown. Enter deploy into the branch input box, and then select Create branch: deploy from 'main'.

    A screenshot showing how to create the deploy branch

    3) On the new deploy branch, navigate down into the .github/workflows folder. You should see a file called deploy.yml, which contains the main GitHub Actions workflow script. Click on the file to view its content. You'll learn more about this file later in the tutorial.

    4) Click the pencil icon in the upper right to edit the document.

    5) Change the RESOURCE_GROUP_NAME: value to msdocswebappapis or another valid resource group name of your choosing.

    6) In the upper right of the screen, select Start commit and then Commit changes to commit your edit. This will persist the change to the file and trigger the GitHub Actions workflow to build and deploy the app.

    A screenshot showing how to commit changes

    7) Switch to the Actions tab along the top navigation again. You should see the workflow running to create the necessary resources and deploy the app. The workflow may take several minutes to run. When it completes successfully, all of the jobs should have a green checkmark icon next to them.

    The completed GitHub workflow.

    Explore the Azure resources

    Once the GitHub Actions workflow has completed successfully you can browse the created resources in the Azure portal.

    1) On the left navigation, select Resource Groups. Next, choose the msdocswebappapis resource group that was created by the GitHub Actions workflow.

    2) You should see seven resources available that match the screenshot and table descriptions below.

    The resources created in Azure.

    • inventory (Container app) - the containerized inventory API.
    • msdocswebappapisacr (Container registry) - a registry that stores the built container images for your apps.
    • msdocswebappapisai (Application Insights) - provides advanced monitoring, logging, and metrics for your apps.
    • msdocswebappapisenv (Container apps environment) - a container environment that manages networking, security, and resource concerns; all of your containers live in this environment.
    • msdocswebappapislogs (Log Analytics workspace) - a workspace environment for managing logging and analytics for the container apps environment.
    • products (Container app) - the containerized products API.
    • store (Container app) - the Blazor front-end web app.
    3) You can view your running app in the browser by clicking on the store container app. On the overview page, click the Application Url link on the upper right of the screen.

    The link to browse the app.

    Understanding the GitHub Actions workflow

    The GitHub Actions workflow created and deployed resources to Azure using the deploy.yml file in the .github folder at the root of the project. The primary purpose of this file is to respond to events - such as commits to a branch - and run jobs to accomplish tasks. The deploy.yml file in the sample project has three main jobs:

    • Provision: Create the necessary resources in Azure, such as the container apps environment. This step leverages Bicep templates to create the Azure resources, which you'll explore in a moment.
    • Build: Create the container images for the three apps in the project and store them in the container registry.
    • Deploy: Deploy the container images to the different container apps created during the provisioning job.

    The deploy.yml file also accepts parameters to make the workflow more dynamic, such as setting the resource group name or the Azure region resources will be provisioned to.

    Below is a commented version of the deploy.yml file that highlights the essential steps.

    name: Build and deploy .NET application to Container Apps

    # Trigger the workflow on pushes to the deploy branch
    on:
      push:
        branches:
          - deploy

    env:
      # Set workflow variables
      RESOURCE_GROUP_NAME: msdocswebappapis

      REGION: eastus

      STORE_DOCKER: Store/Dockerfile
      STORE_IMAGE: store

      INVENTORY_DOCKER: Store.InventoryApi/Dockerfile
      INVENTORY_IMAGE: inventory

      PRODUCTS_DOCKER: Store.ProductApi/Dockerfile
      PRODUCTS_IMAGE: products

    jobs:
      # Create the required Azure resources
      provision:
        runs-on: ubuntu-latest

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Create resource group
            uses: azure/CLI@v1
            with:
              inlineScript: >
                echo "Creating resource group in Azure"
                echo "Executing 'az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}'"
                az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}

          # Use Bicep templates to create the resources in Azure
          - name: Creating resources
            uses: azure/CLI@v1
            with:
              inlineScript: >
                echo "Creating resources"
                az deployment group create --resource-group ${{ env.RESOURCE_GROUP_NAME }} --template-file '/github/workspace/Azure/main.bicep' --debug

      # Build the three app container images
      build:
        runs-on: ubuntu-latest
        needs: provision

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Set up Docker Buildx
            uses: docker/setup-buildx-action@v1

          - name: Login to ACR
            run: |
              set -euo pipefail
              access_token=$(az account get-access-token --query accessToken -o tsv)
              refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
              docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

          - name: Build the products api image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}
              file: ${{ env.PRODUCTS_DOCKER }}

          - name: Build the inventory api image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}
              file: ${{ env.INVENTORY_DOCKER }}

          - name: Build the frontend image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}
              file: ${{ env.STORE_DOCKER }}

      # Deploy the three container images
      deploy:
        runs-on: ubuntu-latest
        needs: build

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Installing Container Apps extension
            uses: azure/CLI@v1
            with:
              inlineScript: >
                az config set extension.use_dynamic_install=yes_without_prompt

                az extension add --name containerapp --yes

          - name: Login to ACR
            run: |
              set -euo pipefail
              access_token=$(az account get-access-token --query accessToken -o tsv)
              refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
              docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

          - name: Deploy Container Apps
            uses: azure/CLI@v1
            with:
              inlineScript: >
                az containerapp registry set -n products -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n products -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}

                az containerapp registry set -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}

                az containerapp registry set -n store -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n store -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}

          - name: logout
            run: >
              az logout

    Understanding the Bicep templates

    During the provisioning stage of the GitHub Actions workflow, the main.bicep file is processed. Bicep files provide a declarative way of generating resources in Azure and are ideal for managing infrastructure as code. You can learn more about Bicep in the related documentation. The main.bicep file in the sample project creates the following resources:

    • The container registry to store images of the containerized apps.
    • The container apps environment, which handles networking and resource management for the container apps.
    • Three container apps - one for the Blazor front-end and two for the back-end product and inventory APIs.
    • Configuration values to connect these services together

    main.bicep without Dapr

    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    // create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    // create the various configuration pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the store api container app
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    main.bicep with Dapr


    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    // create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    // create the various config pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the store api container app
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: true
    }
    }


    Bicep Modules

    The main.bicep file references modules to create resources, such as module products. Modules are a feature of Bicep templates that enable you to abstract resource declarations into their own files or sub-templates. As the main.bicep file is processed, the defined modules are also evaluated. Modules allow you to create resources in a more organized and reusable way. They can also define input and output parameters that are passed to and from the parent template, such as the name of a resource.

    For example, the environment.bicep module extracts the details of creating a container apps environment into a reusable template. The module defines necessary resource dependencies such as Log Analytics Workspaces and an Application Insights instance.

    environment.bicep without Dapr

    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString

    environment.bicep with Dapr


    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString


    The container_app.bicep template defines numerous parameters to provide a reusable template for creating container apps. This allows the module to be used in other CI/CD pipelines as well.

    container_app.bicep without Dapr

    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn

    container_app.bicep with Dapr


    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param appProtocol string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn


    Understanding configuration differences with Dapr

    The code for this specific sample application is largely the same whether or not Dapr is integrated. However, even with this simple app, there are a few benefits and configuration differences when using Dapr that are worth exploring.

    In this scenario most of the changes are related to communication between the container apps. However, you can explore the full range of Dapr benefits by reading the Dapr integration with Azure Container Apps article in the conceptual documentation.

    Without Dapr

    Without Dapr, the main.bicep template handles wiring up the front-end store app to communicate with the back-end APIs by manually managing environment variables. The Bicep template retrieves the fully qualified domain names (FQDNs) of the API apps as output parameters when they are created. Those values are then set as environment variables on the store container app.


    // Retrieve environment variables from API container creation
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    // create the store api container app, passing in config
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    The environment variables are then retrieved inside of the program class and used to configure the base URLs of the corresponding HTTP clients.


    builder.Services.AddHttpClient("Products", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("ProductsApi")));
    builder.Services.AddHttpClient("Inventory", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("InventoryApi")));

    With Dapr

    Dapr can be enabled on a container app when it is created, as seen below. This configuration adds a Dapr sidecar to the app to streamline discovery and communication features between the different container apps in your environment.


    // Create the container app with Dapr enabled
    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]

    // Rest of template omitted for brevity...
    }
    }

    Some of these Dapr features can be surfaced through the program file. You can configure your HttpClient to leverage Dapr configurations when communicating with other apps in your environment.


    // reconfigure code to make requests to Dapr sidecar
    var baseURL = (Environment.GetEnvironmentVariable("BASE_URL") ?? "http://localhost") + ":" + (Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500");
    builder.Services.AddHttpClient("Products", (httpClient) =>
    {
        httpClient.BaseAddress = new Uri(baseURL);
        httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Products");
    });

    builder.Services.AddHttpClient("Inventory", (httpClient) =>
    {
        httpClient.BaseAddress = new Uri(baseURL);
        httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Inventory");
    });


    Clean up resources

    If you're not going to continue to use this application, you can delete the Azure Container Apps and all the associated services by removing the resource group.

    Follow these steps in the Azure portal to remove the resources you created:

    1. In the Azure portal, navigate to the msdocswebappapis resource group using the left navigation or search bar.
    2. Select the Delete resource group button at the top of the resource group Overview.
    3. Enter the resource group name msdocswebappapis in the Are you sure you want to delete "msdocswebappapis" confirmation dialog.
    4. Select Delete.
      The process to delete the resource group may take a few minutes to complete.

    · 10 min read
    Ayca Bas

    Welcome to Day 20 of #30DaysOfServerless!

    Every day, millions of people spend their precious time in productivity tools. What if you could use the data and intelligence behind Microsoft applications (Microsoft Teams, Outlook, and many other Office apps) to build seamless automations and custom apps that boost productivity?

    In this post, we'll learn how to build a seamless onboarding experience for new employees joining a company with the power of Microsoft Graph, integrated with Event Hubs and Logic Apps!


    What We'll Cover

    • ✨ The power of Microsoft Graph
    • 🖇️ How do Microsoft Graph and Event Hubs work together?
    • 🛠 Let's Build an Onboarding Workflow!
      • 1️⃣ Setup Azure Event Hubs + Key Vault
      • 2️⃣ Subscribe to users, receive change notifications from Logic Apps
      • 3️⃣ Create Onboarding workflow in the Logic Apps
    • 🚀 Debug: Your onboarding experience
    • ✋ Exercise: Try this tutorial out yourself!
    • 📚 Resources: For Self-Study


    ✨ The Power of Microsoft Graph

    Microsoft Graph is the gateway to data and intelligence in the Microsoft 365 platform. Microsoft Graph exposes REST APIs and client libraries to access data across Microsoft 365 core services such as Calendar, Teams, To Do, Outlook, People, Planner, OneDrive, OneNote, and more.

    Overview of Microsoft Graph
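
    As a small illustration of those REST APIs, here is a hedged sketch that lists users with a plain HTTPS call. It assumes Node.js 18+ (for the global fetch) and an access token that already carries the required permission (for example User.Read.All); acquiring such a token via an Azure AD app registration is covered later in this post.

    // Sketch: call the Microsoft Graph REST API directly with an existing access token.
    async function listUserDisplayNames(accessToken) {
      const response = await fetch("https://graph.microsoft.com/v1.0/users", {
        headers: { Authorization: `Bearer ${accessToken}` }
      });
      if (!response.ok) {
        throw new Error(`Graph request failed: ${response.status}`);
      }
      const { value } = await response.json();
      return value.map((user) => user.displayName);
    }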

    You can build custom experiences with Microsoft Graph, such as automating the onboarding process for new employees. When new employees are created in Azure Active Directory, they will be automatically added to the Onboarding team on Microsoft Teams.

    Solution architecture


    🖇️ Microsoft Graph with Event Hubs

    Microsoft Graph uses a webhook mechanism to track changes in resources and deliver change notifications to the clients. For example, with Microsoft Graph Change Notifications, you can receive change notifications when:

    • a new task is added in the to-do list
    • a user changes the presence status from busy to available
    • an event is deleted/cancelled from the calendar

    If you'd like to track a large set of resources at a high frequency, use Azure Event Hubs instead of traditional webhooks to receive change notifications. Azure Event Hubs is a popular real-time event ingestion and distribution service built for scale.

    EVENT GRID - PARTNER EVENTS

    Microsoft Graph Change Notifications can be also received by using Azure Event Grid -- currently available for Microsoft Partners! Read the Partner Events Overview documentation for details.

    Setup Azure Event Hubs + Key Vault.

    To get Microsoft Graph Change Notifications delivered to Azure Event Hubs, we'll have to set up Azure Event Hubs and Azure Key Vault. We'll use Azure Key Vault to store and access the Event Hubs connection string.

    1️⃣ Create Azure Event Hubs

    1. Go to the Azure Portal, select Create a resource, type Event Hubs, and select Create.
    2. Fill in the Event Hubs namespace creation details, and then click Create.
    3. Go to the newly created Event Hubs namespace page, select Event Hubs tab from the left pane and + Event Hub:
      • Name your Event Hub as Event Hub
      • Click Create.
    4. Click the name of the Event Hub, and then select Shared access policies and + Add to add a new policy:
      • Give a name to the policy
      • Check Send and Listen
      • Click Create.
    5. After the policy has been created, click the name of the policy to open the details panel, and then copy the Connection string-primary key value. Write it down; you'll need it for the next step.
    6. Go to Consumer groups tab in the left pane and select + Consumer group, give a name for your consumer group as onboarding and select Create.

    2️⃣ Create Azure Key Vault

    1. Go to Azure Portal and select Create a resource, type Key Vault and select Create.
    2. Fill in the Key Vault creation details, and then click Review + Create.
    3. Go to newly created Key Vault and select Secrets tab from the left pane and click + Generate/Import:
      • Give a name to the secret
      • For the value, paste in the connection string you generated at the Event Hubs step
      • Click Create
      • Copy the name of the secret.
    4. Select Access Policies from the left pane and + Add Access Policy:
      • For Secret permissions, select Get
      • For Principal, select Microsoft Graph Change Tracking
      • Click Add.
    5. Select Overview tab from the left pane and copy the Vault URI.

    Subscribe for Logic Apps change notifications

    To start receiving Microsoft Graph Change Notifications, we'll need to create a subscription to the resource that we'd like to track - here, users. We'll use Azure Logic Apps to create the subscription.

    To create a subscription for Microsoft Graph Change Notifications, we'll need to make an HTTP POST request to https://graph.microsoft.com/v1.0/subscriptions. Microsoft Graph requires Azure Active Directory authentication to make API calls. First, we'll need to register an app in Azure Active Directory, and then we'll make the Microsoft Graph Subscription API call with Azure Logic Apps.
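
    To make the shape of that call concrete before building it in Logic Apps, here is a hedged sketch of the same subscription request from Node.js using the client credentials flow. It assumes Node.js 18+, the @azure/identity package, and placeholder values for the app registration, Key Vault, and tenant created in the steps below - the post itself uses Logic Apps for this, so treat the snippet as illustrative only.

    const { ClientSecretCredential } = require("@azure/identity");

    async function createUsersSubscription() {
      // App registration details - created in the Azure Active Directory step below.
      const credential = new ClientSecretCredential("<TENANT-ID>", "<CLIENT-ID>", "<CLIENT-SECRET>");
      const { token } = await credential.getToken("https://graph.microsoft.com/.default");

      const response = await fetch("https://graph.microsoft.com/v1.0/subscriptions", {
        method: "POST",
        headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
        body: JSON.stringify({
          changeType: "created, updated",
          clientState: "secretClientValue",
          expirationDateTime: new Date(Date.now() + 60 * 60 * 1000).toISOString(),
          notificationUrl: "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=<YOUR-TENANT-ID>",
          resource: "users"
        })
      });
      return response.json();
    }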

    1️⃣ Create an app in Azure Active Directory

    1. In the Azure Portal, go to Azure Active Directory and select App registrations from the left pane and select + New registration. Fill in the details for the new App registration form as below:
      • Name: Graph Subscription Flow Auth
      • Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
      • Select Register.
    2. Go to newly registered app in Azure Active Directory, select API permissions:
      • Select + Add a permission and Microsoft Graph
      • Select Application permissions and add User.Read.All and Directory.Read.All.
      • Select Grant admin consent for the organization
    3. Select Certificates & secrets tab from the left pane, select + New client secret:
      • Choose desired expiry duration
      • Select Add
      • Copy the value of the secret.
    4. Go to Overview from the left pane, copy Application (client) ID and Directory (tenant) ID.

    2️⃣ Create subscription with Azure Logic Apps

    1. Go to the Azure Portal, select Create a resource, type Logic Apps, and select Create.

    2. Fill in the Logic Apps creation details, and then click Create.

    3. Go to the newly created Logic Apps page, select Workflows tab from the left pane and select + Add:

      • Give a name to the new workflow as graph-subscription-flow
      • Select Stateful as a state type
      • Click Create.
    4. Go to graph-subscription-flow, and then select Designer tab.

    5. In the Choose an operation section, search for Schedule and select Recurrence as a trigger. Fill in the parameters as below:

      • Interval: 61
      • Frequency: Minute
      • Time zone: Select your own time zone
      • Start time: Set a start time
    6. Select + button in the flow and select add an action. Search for HTTP and select HTTP as an action. Fill in the parameters as below:

      • Method: POST
      • URI: https://graph.microsoft.com/v1.0/subscriptions
      • Headers:
        • Key: Content-type
        • Value: application/json
      • Body:
      {
      "changeType": "created, updated",
      "clientState": "secretClientValue",
      "expirationDateTime": "@{addHours(utcNow(), 1)}",
      "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47",
      "resource": "users"
      }

      In notificationUrl, make sure to replace <YOUR-VAULT-URI> with the vault URI, <YOUR-KEY-VAULT-SECRET-NAME> with the secret name you copied from the Key Vault, and <YOUR-TENANT-ID> with your Directory (tenant) ID.

      In resource, define the resource type you'd like to track changes. For our example, we will track changes for users resource.

      • Authentication:
        • Authentication type: Active Directory OAuth
        • Authority: https://login.microsoft.com
        • Tenant: Directory (tenant) ID copied from AAD app
        • Audience: https://graph.microsoft.com
        • Client ID: Application (client) ID copied from AAD app
        • Credential Type: Secret
        • Secret: value of the secret copied from AAD app
    7. Select Save and run your workflow from the Overview tab.

      Check your subscription in Graph Explorer: If you'd like to make sure that your subscription was created successfully by Logic Apps, go to Graph Explorer, log in with your Microsoft 365 account, and make a GET request to https://graph.microsoft.com/v1.0/subscriptions. Your subscription should appear in the response once it's been created successfully.
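      If you'd rather check from a terminal than Graph Explorer, a quick sketch (assuming you're signed in with the Azure CLI and your signed-in account is allowed to read subscriptions):

      ```bash
      # Sketch: list the active Microsoft Graph subscriptions from the command line.
      TOKEN=$(az account get-access-token --resource https://graph.microsoft.com --query accessToken -o tsv)

      curl -s -H "Authorization: Bearer $TOKEN" https://graph.microsoft.com/v1.0/subscriptions
      ```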

    Subscription workflow success

    After the subscription is created successfully by Logic Apps, Azure Event Hubs will receive a notification whenever a new user is created in Azure Active Directory.


    Create Onboarding workflow in Logic Apps

    We'll create a second workflow in the Logic App to receive change notifications from Event Hubs when a new user is created in Azure Active Directory and add the new user to the Onboarding team on Microsoft Teams.

    1. Go to the Logic Apps you created in the previous steps, select Workflows tab and create a new workflow by selecting + Add:
      • Name the new workflow teams-onboarding-flow
      • Select Stateful as a state type
      • Click Create.
    2. Go to teams-onboarding-flow, and then select Designer tab.
    3. In the Choose an operation section, search for Event Hub and select When events are available in Event Hub as the trigger. Set up the Event Hub connection as below:
      • Create Connection:
        • Connection name: Connection
        • Authentication Type: Connection String
        • Connection String: Go to Event Hubs > Shared Access Policies > RootManageSharedAccessKey and copy Connection string–primary key
        • Select Create.
      • Parameters:
        • Event Hub Name: Event Hub
        • Consumer Group Name: onboarding (if you haven't created this consumer group yet, see the CLI sketch after these steps)
    4. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: Events
    5. Inside For each, select + in the flow and add an action, search for Data operations and select Parse JSON. Fill in Parse JSON action as below:
      • Content: Events Content
      • Schema: Copy the JSON content from schema-parse.json and paste it as the schema
    6. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: value
      1. Inside For each, select + in the flow and add an action, search for Microsoft Teams and select Add a member to a team. Log in with your Microsoft 365 account to create a connection and fill in the Add a member to a team action as below:
      • Team: Create an Onboarding team on Microsoft Teams and select it
      • A user AAD ID for the user to add to a team: id
    7. Select Save.
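    If the onboarding consumer group referenced in step 3 doesn't exist on your Event Hub yet, here's a minimal Azure CLI sketch to create it (the resource group, namespace, and hub names are placeholders for whatever you created earlier):

    ```bash
    # Sketch: create the 'onboarding' consumer group used by the Logic Apps trigger.
    # <RESOURCE-GROUP>, <NAMESPACE-NAME> and <EVENT-HUB-NAME> are placeholders.
    az eventhubs eventhub consumer-group create \
      --resource-group <RESOURCE-GROUP> \
      --namespace-name <NAMESPACE-NAME> \
      --eventhub-name <EVENT-HUB-NAME> \
      --name onboarding
    ```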

    🚀 Debug your onboarding experience

    To debug our onboarding experience, we'll create a new user in Azure Active Directory and check whether they're automatically added to the Onboarding team in Microsoft Teams.

    1. Go to Azure Portal and select Azure Active Directory from the left pane and go to Users. Select + New user and Create new user. Fill in the details as below:

      • User name: JaneDoe
      • Name: Jane Doe

      new user in Azure Active Directory

    2. Adding Jane Doe as a new user should trigger the teams-onboarding-flow to run. teams onboarding flow success

    3. Once the teams-onboarding-flow runs successfully, you should be able to see Jane Doe as a member of the Onboarding team on Microsoft Teams! 🥳 new member in Onboarding team on Microsoft Teams

    Congratulations! 🎉

    You just built an onboarding experience using Azure Logic Apps, Azure Event Hubs and Azure Key Vault.


    📚 Resources

    - + \ No newline at end of file diff --git a/blog/tags/30-days-of-serverless/page/6/index.html b/blog/tags/30-days-of-serverless/page/6/index.html index f1e28acd04..4fe96efc4c 100644 --- a/blog/tags/30-days-of-serverless/page/6/index.html +++ b/blog/tags/30-days-of-serverless/page/6/index.html @@ -14,14 +14,14 @@ - +

    20 posts tagged with "30-days-of-serverless"

    View All Tags

    · 10 min read
    Brian Benz

    Welcome to Day 18 of #30DaysOfServerless!

    Yesterday my Serverless September post introduced you to making Azure Logic Apps and Azure Cosmos DB work together with a sample application that collects weather data. Today I'm sharing a more robust solution that actually reads my mail. Let's learn about Teaching the cloud to read your mail!

    Ready? Let's go!


    What We'll Cover

    • Introduction to the ReadMail solution
    • Setting up Azure storage, Cosmos DB and Computer Vision
    • Connecting it all together with a Logic App
    • Resources: For self-study!


    Introducing the ReadMail solution

    The US Postal system offers a subscription service that sends you images of mail it will be delivering to your home. I decided it would be cool to try getting Azure to collect data based on these images, so that I could categorize my mail and track the types of mail that I received.

    To do this, I used Azure storage, Cosmos DB, Logic Apps, and computer vision. When a new email comes in from the US Postal service (USPS), it triggers a logic app that:

    • Posts attachments to Azure storage
    • Triggers Azure Computer vision to perform an OCR function on attachments
    • Extracts any results into a JSON document
    • Writes the JSON document to Cosmos DB

    workflow for the readmail solution

    In this post I'll walk you through setting up the solution for yourself.

    Prerequisites

    Setup Azure Services

    First, we'll create all of the target services that our Logic App needs, then we'll create the Logic App itself.

    1. Azure Storage

    We'll be using Azure storage to collect attached images from emails as they arrive. Adding images to Azure storage will also trigger a workflow that performs OCR on new attached images and stores the OCR data in Cosmos DB.

    To create a new Azure storage account from the portal dashboard, Select Create a resource > Storage account > Create.

    The Basics tab covers all of the features and information that we will need for this solution:

    Section | Field | Required or optional | Description
    Project details | Subscription | Required | Select the subscription for the new storage account.
    Project details | Resource group | Required | Create a new resource group that you will use for storage, Cosmos DB, Computer Vision and the Logic App.
    Instance details | Storage account name | Required | Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
    Instance details | Region | Required | Select the appropriate region for your storage account.
    Instance details | Performance | Required | Select Standard performance for general-purpose v2 storage accounts (default).
    Instance details | Redundancy | Required | Select locally-redundant storage (LRS) for this example.

    Select Review + create to accept the remaining default options, then validate and create the account.
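    If you prefer the command line, the same setup is roughly the sketch below (the resource group and account names are placeholders; Standard_LRS matches the redundancy choice above):

    ```bash
    # Sketch: create the resource group and a general-purpose v2 storage account.
    az group create --name readmail-rg --location eastus

    az storage account create \
      --name readmailstorage123 \
      --resource-group readmail-rg \
      --location eastus \
      --sku Standard_LRS \
      --kind StorageV2
    ```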

    2. Azure CosmosDB

    Cosmos DB will be used to store the JSON documents returned by the Computer Vision OCR process.

    See more details and screenshots for setting up Cosmos DB in yesterday's Serverless September post - Using Logic Apps with Cosmos DB

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then for simplicity use the same resource group you created when you set up storage. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create.

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    Setting | Action
    Container ID | id
    Container partition | /id

    Press OK to create the database and container.
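    If you'd rather script this part, here's a minimal Azure CLI sketch (the account, database, and container names are assumptions; the partition key matches the /id choice above):

    ```bash
    # Sketch: create a Cosmos DB (Core SQL API) account, a database, and a container.
    az cosmosdb create --name readmail-cosmos --resource-group readmail-rg

    az cosmosdb sql database create \
      --account-name readmail-cosmos \
      --resource-group readmail-rg \
      --name mailreader

    az cosmosdb sql container create \
      --account-name readmail-cosmos \
      --resource-group readmail-rg \
      --database-name mailreader \
      --name mail \
      --partition-key-path "/id"
    ```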

    3. Azure Computer Vision

    Azure Cognitive Services' Computer Vision will perform an OCR process on each image attachment that is stored in Azure storage.

    From the portal dashboard, Select Create a resource > AI + Machine Learning > Computer Vision > Create.

    The Basics and Identity tabs cover all of the features and information that we will need for this solution:

    Basics Tab

    Section | Field | Required or optional | Description
    Project details | Subscription | Required | Select the subscription for the new service.
    Project details | Resource group | Required | Use the same resource group that you used for Azure storage and Cosmos DB.
    Instance details | Region | Required | Select the appropriate region for your Computer Vision service.
    Instance details | Name | Required | Choose a unique name for your Computer Vision service.
    Instance details | Pricing | Required | Select the free tier for this example.

    Identity Tab

    Section | Field | Required or optional | Description
    System assigned managed identity | Status | Required | Enable system assigned identity to grant the resource access to other existing resources.

    Select Review + create to accept the remaining default options, then validate and create the account.
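    Scripted, that roughly corresponds to the sketch below (the name is a placeholder, F0 is the free tier, and the identity command mirrors the Identity tab setting):

    ```bash
    # Sketch: create a Computer Vision resource on the free tier and enable
    # its system-assigned managed identity.
    az cognitiveservices account create \
      --name readmail-vision \
      --resource-group readmail-rg \
      --kind ComputerVision \
      --sku F0 \
      --location eastus

    az cognitiveservices account identity assign \
      --name readmail-vision \
      --resource-group readmail-rg
    ```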


    Connect it all with a Logic App

    Now we're ready to put this all together in a Logic App workflow!

    1. Create Logic App

    From the portal dashboard, Select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults.

    2. Create Workflow: Add Trigger

    Once the Logic App is created, select Create a workflow from designer.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for outlook.com on the right under Add a trigger. Choose outlook.com. Choose When a new email arrives as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Set the following values:

    Parameter | Value
    Folder | Inbox
    Importance | Any
    Only With Attachments | Yes
    Include Attachments | Yes

    Then add a new parameter:

    Parameter | Value
    From | Add the email address that sends you the email with attachments
    3. Create Workflow: Add Action (for Trigger)

    Choose add an action and choose control > for-each.

    logic app for each

    Inside the for-each action, in Select an output from previous steps, choose attachments. Then, again inside the for-each action, add the create blob action:

    Set the following values:

    Parameter | Value
    Folder Path | /mailreaderinbox
    Blob Name | Attachments Name
    Blob Content | Attachments Content

    This extracts attachments from the email and creates a new blob for each attachment.

    Next, inside the same for-each action, add the get blob content action.

    Set the following values:

    Parameter | Value
    Blob | id
    Infer content type | Yes

    We create and read from a blob for each attachment because Computer Vision needs a non-virtual source to read from when performing an OCR process. Because we enabled the system assigned identity to grant Computer Vision access to other existing resources, it can access the blob but not the outlook.com attachment. We also pass the ID of the blob along to use as a unique ID when writing to Cosmos DB.

    create blob from attachments

    Next, inside the same for-each action, choose add an action and choose control > condition. Set the value to Media Type > is equal to > image/JPEG

    The USPS sends attachments of multiple types, but we only want to scan attachments that have images of our mail, which are always JPEG images. If the condition is true, we will process the image with Computer Vision OCR and write the results to a JSON document in CosmosDB.

    In the True section of the condition, add an action and choose Computer Vision API > Optical Character Recognition (OCR) to JSON.

    Set the following values:

    Parameter | Value
    Image Source | Image Content
    Image content | File Content
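    Under the hood, this connector action is roughly equivalent to calling the Computer Vision OCR REST endpoint yourself. A hedged curl sketch, with the endpoint, key variables, and blob URL as placeholders:

    ```bash
    # Sketch: call the Computer Vision OCR endpoint directly for an image URL.
    # VISION_ENDPOINT, VISION_KEY and the blob URL are placeholders.
    curl -s -X POST "$VISION_ENDPOINT/vision/v3.2/ocr" \
      -H "Ocp-Apim-Subscription-Key: $VISION_KEY" \
      -H "Content-Type: application/json" \
      -d '{"url": "https://readmailstorage123.blob.core.windows.net/mailreaderinbox/scan.jpg"}'
    ```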

    In the same True section of the condition, choose add an action and choose Cosmos DB. Choose Create or Update Document from the actions. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    Be sure to use the ID passed from blob storage as your unique ID for Cosmos DB. That way you can troubleshoot any JSON or OCR issues by tracing the JSON document in Cosmos DB back to the blob in Azure storage. Also, include the Computer Vision JSON response, as it contains the results of the Computer Vision OCR scan. All other elements are optional.

    4. TEST WORKFLOW

    When complete, you should have an action in the Logic App designer that looks something like this:

    Logic App workflow create or update document in cosmosdb

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB each time that an email arrives with image attachments.

    Check the data in Cosmos DB by opening the Data explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    Congratulations!

    You just built your personal ReadMail solution with Logic Apps! 🎉


    Resources: For self-study!

    Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    - + \ No newline at end of file diff --git a/blog/tags/30-days-of-serverless/page/7/index.html b/blog/tags/30-days-of-serverless/page/7/index.html index 9dc5767176..4263ea32d2 100644 --- a/blog/tags/30-days-of-serverless/page/7/index.html +++ b/blog/tags/30-days-of-serverless/page/7/index.html @@ -14,14 +14,14 @@ - +

    20 posts tagged with "30-days-of-serverless"

    View All Tags

    · 6 min read
    Brian Benz

    Welcome to Day 17 of #30DaysOfServerless!

    In past weeks, we've covered serverless technologies that provide core capabilities (functions, containers, microservices) for building serverless solutions. This week we're looking at technologies that make service integrations more seamless, starting with Logic Apps. Let's look at one usage example today!

    Ready? Let's Go!


    What We'll Cover

    • Introduction to Logic Apps
    • Setting up Cosmos DB for Logic Apps
    • Setting up a Logic App connection and event
    • Writing data to Cosmos DB from a Logic app
    • Resources: For self-study!


    Introduction to Logic Apps

    Previously in Serverless September, we've covered Azure Functions, where the event triggers code. In Logic Apps, the event triggers a workflow that you design. Logic Apps enable serverless applications to connect to external sources for data then automate business processes via workflows.

    In this post I'll walk you through setting up a Logic App that works with Cosmos DB. For this example, we'll connect to the MSN weather service, design a Logic App workflow that collects data when the weather changes, and write the data to Cosmos DB.

    PREREQUISITES

    Setup Cosmos DB for Logic Apps

    Cosmos DB has many APIs to choose from, but to use the default Logic App connection, we need to choose the Cosmos DB SQL API. We'll set this up via the Azure Portal.

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then create a new resource group called CosmosWeather. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create.

    Azure Cosmos DB is available in two different capacity modes: provisioned throughput and serverless. You can perform the same database operations in both modes, but the way you get billed for these operations is different. We will be using provisioned throughput and the free tier for this example.

    Setup the CosmosDB account

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    Setting | Action
    Container ID | id
    Container partition | /id

    Press OK to create the database and container.

    A database is analogous to a traditional DBMS namespace. It's used to organize one or more containers.

    Setup the CosmosDB Container

    Now we're ready to set up our Logic App and write to Cosmos DB!

    Setup Logic App connection + event

    Once the Cosmos DB SQL API account is created, we can set up our Logic App. From the portal dashboard, Select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults. Once your new Logic App is created, select Create a workflow from designer to get started.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for weather on the right under Add a trigger. Choose MSN Weather. Choose When the current conditions change as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Add a location. Valid locations are City, Region, State, Country, Landmark, Postal Code, latitude and longitude. This triggers a new workflow when the conditions change for a location.

    Write data from Logic App to Cosmos DB

    Now we are ready to set up the action to write data to Cosmos DB. Choose add an action and choose Cosmos DB.

    An action is each step in a workflow after the trigger. Every action runs some operation in a workflow.

    In this case, we will be writing a JSON document to the Cosmos DB container we created earlier. Choose Create or Update Document from the actions. At this point you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger

    Start with the connection to set up the Cosmos DB action. Select Access Key, then provide the primary read-write key (found under Keys in Cosmos DB) and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    You will need a unique ID for each document that you write to Cosmos DB; for that you can use an expression. Because we declared id to be our unique ID in Cosmos DB, we will use that for the name. Under expressions, type guid() and press Enter to add a unique ID to the JSON document. When complete, you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger and action

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB over the next few minutes.

    Check the data in Cosmos DB by opening the Data explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    Resources: For self-study!

    Once you've grasped the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    - + \ No newline at end of file diff --git a/blog/tags/30-days-of-serverless/page/8/index.html b/blog/tags/30-days-of-serverless/page/8/index.html index d0c66b0ebe..44b292cf2f 100644 --- a/blog/tags/30-days-of-serverless/page/8/index.html +++ b/blog/tags/30-days-of-serverless/page/8/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "30-days-of-serverless"

    View All Tags

    · 4 min read
    Nitya Narasimhan
    Devanshi Joshi

    Welcome to Day 15 of #30DaysOfServerless!

    This post marks the midpoint of our Serverless on Azure journey! Our Week 2 Roadmap showcased two key technologies - Azure Container Apps (ACA) and Dapr - for building serverless microservices. We'll also look at what happened elsewhere in #ServerlessSeptember, then set the stage for our next week's focus: Serverless Integrations.

    Ready? Let's Go!


    What We'll Cover

    • ICYMI: This Week on #ServerlessSeptember
    • Recap: Microservices, Azure Container Apps & Dapr
    • Coming Next: Serverless Integrations
    • Exercise: Take the Cloud Skills Challenge
    • Resources: For self-study!

    This Week In Events

    We had a number of activities happen this week - here's a quick summary:

    This Week in #30Days

    In our #30Days series we focused on Azure Container Apps and Dapr.

    • In Hello Container Apps we learned how Azure Container Apps helps you run microservices and containerized apps on serverless platforms. And we built and deployed our first ACA.
    • In Microservices Communication we explored concepts like environments and virtual networking, with a hands-on example to show how two microservices communicate in a deployed ACA.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and how to configure autoscaling for your ACA based on KEDA-supported triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr) and learned how its Building Block APIs and sidecar architecture make it easier to develop microservices with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
    • Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Here's a visual recap:

    Self Study: Code Samples & Tutorials

    There's no better way to get familiar with the concepts than to dive in and play with code samples and hands-on tutorials. Here are 4 resources to bookmark and try out:

    1. Dapr Quickstarts - these walk you through samples showcasing individual Building Block APIs - with multiple language options available.
    2. Dapr Tutorials provides more complex examples of microservices applications and tools usage, including a Distributed Calculator polyglot app.
    3. Next, try to Deploy a Dapr application to Azure Container Apps to get familiar with the process of setting up the environment, then deploying the app.
    4. Or, explore the many Azure Container Apps samples showcasing various features and more complex architectures tied to real world scenarios.

    What's Next: Serverless Integrations!

    So far we've talked about core technologies (Azure Functions, Azure Container Apps, Dapr) that provide foundational support for your serverless solution. Next, we'll look at Serverless Integrations - specifically at technologies like Azure Logic Apps and Azure Event Grid that automate workflows and create seamless end-to-end solutions that integrate other Azure services in serverless-friendly ways.

    Take the Challenge!

    The Cloud Skills Challenge is still going on, and we've already had hundreds of participants join and complete the learning modules to skill up on Serverless.

    There's still time to join and get yourself on the leaderboard. Get familiar with Azure Functions, SignalR, Logic Apps, Azure SQL and more - in serverless contexts!!


    - + \ No newline at end of file diff --git a/blog/tags/30-days-of-serverless/page/9/index.html b/blog/tags/30-days-of-serverless/page/9/index.html index c245cbe84c..5636cfc73a 100644 --- a/blog/tags/30-days-of-serverless/page/9/index.html +++ b/blog/tags/30-days-of-serverless/page/9/index.html @@ -14,7 +14,7 @@ - + @@ -24,7 +24,7 @@ Image showing container apps role assignment

  • Lastly, we need to restart the container app revision, to do so run the command below:

     ##Get revision name and assign it to a variable
    $REVISION_NAME = (az containerapp revision list `
    --name $BACKEND_SVC_NAME `
    --resource-group $RESOURCE_GROUP `
    --query [0].name)

    ##Restart revision by name
    az containerapp revision restart `
    --resource-group $RESOURCE_GROUP `
    --name $BACKEND_SVC_NAME `
    --revision $REVISION_NAME
  • Run end-to-end Test on Azure

    From the Azure Portal, select the Azure Container App orders-processor and navigate to Log stream under the Monitoring tab; leave the stream connected and open. Still in the Azure Portal, select the Azure Service Bus Namespace ordersservices, select the topic orderreceivedtopic, select the subscription named orders-processor-subscription, then click on Service Bus Explorer (preview). From there we need to publish/send a message. Use the JSON payload below:

    ```json
    {
    "data": {
    "reference": "Order 150",
    "quantity": 150,
    "createdOn": "2022-05-10T12:45:22.0983978Z"
    }
    }
    ```

    If all is configured correctly, you should start seeing information logs in the Container Apps Log stream, similar to the images below. Image showing publishing messages from Azure Service

    Information logs on the Log stream of the deployed Azure Container App Image showing ACA Log Stream

    🎉 CONGRATULATIONS

    You have successfully deployed an Azure Container App to the cloud and configured the Dapr Pub/Sub API with Azure Service Bus.

    9. Clean up

    If you are done with the tutorial, use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name $RESOURCE_GROUP

    Exercise

    I've left the configuration of the Dapr State Store API with Azure Cosmos DB for you :)

    When you look at the action method OrderReceived in the controller ExternalOrdersController, you will see that I left a line with a ToDo: note; this line is responsible for saving the received message (OrderModel) into Azure Cosmos DB.

    There is no need to change anything in the code base (other than removing this commented line). That's the beauty of Dapr Building Blocks: they make it easy to plug components into our microservice application without any plumbing or bringing in external SDKs.

    You do need to work on the configuration part of the Dapr State Store by creating a new component file, as we did for the Pub/Sub API. The things you need to work on are:

    • Provision Azure Cosmos DB Account and obtain its masterKey.
    • Create a Dapr Component file adhering to Dapr Specs.
    • Create an Azure Container Apps component file adhering to ACA component specs.
    • Test locally on your dev machine using Dapr Component file.
    • Register the new Dapr State Store component with the Azure Container Apps Environment and set the Cosmos DB masterKey from the Azure Portal (see the CLI sketch after this list). If you want to challenge yourself more, use the Managed Identity approach as done in this post! It's the right way to protect your keys, and you won't have to worry about managing Cosmos DB keys anymore!
    • Build a new image of the application and push it to Azure Container Registry.
    • Update Azure Container Apps and create a new revision which contains the updated code.
    • Verify the results by checking Azure Cosmos DB; you should see the Order Model stored there.
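    For the registration step above, here's a minimal Azure CLI sketch (the Cosmos DB account name, environment name, and component YAML file name are assumptions):

    ```bash
    # Sketch: fetch the Cosmos DB key and register a Dapr state store component
    # with the Container Apps environment. Placeholder names throughout.
    # The key itself goes into the component YAML (or an ACA secret); it's
    # retrieved here only to show where it comes from.
    COSMOS_KEY=$(az cosmosdb keys list \
      --name <COSMOS-ACCOUNT-NAME> \
      --resource-group $RESOURCE_GROUP \
      --query primaryMasterKey -o tsv)

    az containerapp env dapr-component set \
      --name <ENVIRONMENT-NAME> \
      --resource-group $RESOURCE_GROUP \
      --dapr-component-name statestore \
      --yaml ./aca-statestore.yaml
    ```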

    If you need help, you can always refer to my blog post Azure Container Apps State Store With Dapr State Management API, which contains exactly what you need to implement here, so I'm very confident you will be able to complete this exercise with no issues. Happy coding :)

    What's Next?

    If you enjoyed working with Dapr and Azure Container Apps and want a deeper dive into more complex scenarios (Dapr bindings, service discovery, autoscaling with KEDA, synchronous service communication, distributed tracing, health probes, etc.) where multiple services are deployed to a single Container Apps Environment, I have created a detailed tutorial that walks you through building the application step by step, in thorough detail.

    The posts published so far are listed below, and I'm publishing more on a weekly basis, so stay tuned :)

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/ask-the-expert/index.html b/blog/tags/ask-the-expert/index.html index 1b0f304f09..a53dbf8ad3 100644 --- a/blog/tags/ask-the-expert/index.html +++ b/blog/tags/ask-the-expert/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "ask-the-expert"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings, that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
    • Integrate Functions with Cosmos DB and SignalR to bring real-time push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a Cosmos DB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

    Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn which walks you through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days) but this time, with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

    Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - that explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for a simplified microservices-based solution with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!

    - + \ No newline at end of file diff --git a/blog/tags/asp-net/index.html b/blog/tags/asp-net/index.html index ef8a9412a1..55b1ba40a6 100644 --- a/blog/tags/asp-net/index.html +++ b/blog/tags/asp-net/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "asp.net"

    View All Tags

    · 19 min read
    Alex Wolf

    Welcome to Day 24 of #30DaysOfServerless!

    We continue exploring E2E scenarios with this tutorial where you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps.

    The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.


    What We'll Cover

    • Deploy ASP.NET Core 6.0 app to Azure Container Apps
    • Automate deployment workflows using GitHub Actions
    • Provision and deploy resources using Azure Bicep
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.

    In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps. The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.

    You will use GitHub Actions in combination with Bicep to deploy the application. These tools provide an approachable and sustainable solution for building CI/CD pipelines and working with Container Apps.

    PRE-REQUISITES

    Architecture

    In this tutorial, we'll set up a container apps environment with a separate container for each project in the sample store app. The major components of the sample project include:

    • A Blazor Server front-end web app to display product information
    • A products API to list available products
    • An inventory API to determine how many products are in stock
    • GitHub Actions and Bicep templates to provision Azure resources and then build and deploy the sample app.

    You will explore these templates later in the tutorial.

    Public internet traffic should be proxied to the Blazor app. The back-end APIs should only be reachable via requests from the Blazor app inside the container apps environment. This setup can be achieved using container apps environment ingress configurations during deployment.

    An architecture diagram of the shopping app


    Project Sources

    Want to follow along? Fork the sample below. The tutorial can be completed with or without Dapr integration. Pick the path you feel comfortable with. Dapr provides various benefits that make working with Microservices easier - you can learn more in the docs. For this tutorial you will need GitHub and Azure CLI.

    PICK YOUR PATH

    To follow along with this tutorial, fork the relevant sample project below.

    You can run the app locally from Visual Studio:

    • Right click on the Blazor Store project and select Set as Startup Project.
    • Press the start button at the top of Visual Studio to run the app.
    • (Once running) start each API in the background by right-clicking on the project node and selecting Debug --> Start without debugging.

    Once the Blazor app is running, you should see something like this:

    An architecture diagram of the shopping app


    Configuring Azure credentials

    In order to deploy the application to Azure through GitHub Actions, you first need to create a service principal. The service principal will allow the GitHub Actions process to authenticate to your Azure subscription to create resources and deploy code. You can learn more about Service Principals in the Azure CLI documentation. For this step you'll need to be logged into the Azure CLI.

    1) If you have not done so already, make sure to fork the sample project to your own GitHub account or organization.

    2) Once you have completed this step, create a service principal using the Azure CLI command below:

    ```azurecli
    $subscriptionId=$(az account show --query id --output tsv)
    az ad sp create-for-rbac --sdk-auth --name WebAndApiSample --role Contributor --scopes /subscriptions/$subscriptionId
    ```

    3) Copy the JSON output of the CLI command to your clipboard.

    4) Under the settings tab of your forked GitHub repo, create a new secret named AzureSPN. The name is important: it must match the secret name referenced by the GitHub Actions workflow included in the project, which we'll review later. Paste the service principal JSON you copied into the secret and save your changes. This new secret will be used by the GitHub Actions workflow to authenticate to Azure.

    A screenshot of adding GitHub secrets.
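    If you have the GitHub CLI installed, the same secret can be created from a terminal. A sketch, assuming you saved the service principal JSON from the previous step to a local file named sp.json:

    ```bash
    # Sketch: create the AzureSPN repository secret with the GitHub CLI.
    # <your-account>/<your-fork> is a placeholder for your forked repository.
    gh secret set AzureSPN --repo <your-account>/<your-fork> < sp.json
    ```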

    Deploy using GitHub Actions

    You are now ready to deploy the application to Azure Container Apps using GitHub Actions. The sample application includes a GitHub Actions template that is configured to build and deploy any changes to a branch named deploy. The deploy branch does not exist in your forked repository by default, but you can easily create it through the GitHub user interface.

    1) Switch to the Actions tab along the top navigation of your GitHub repository. If you have not done so already, ensure that workflows are enabled by clicking the button in the center of the page.

    A screenshot showing how to enable GitHub actions

    2) Navigate to the main Code tab of your repository and select the main dropdown. Enter deploy into the branch input box, and then select Create branch: deploy from 'main'.

    A screenshot showing how to create the deploy branch

    3) On the new deploy branch, navigate down into the .github/workflows folder. You should see a file called deploy.yml, which contains the main GitHub Actions workflow script. Click on the file to view its content. You'll learn more about this file later in the tutorial.

    4) Click the pencil icon in the upper right to edit the document.

    5) Change the RESOURCE_GROUP_NAME: value to msdocswebappapis or another valid resource group name of your choosing.

    6) In the upper right of the screen, select Start commit and then Commit changes to commit your edit. This will persist the change to the file and trigger the GitHub Actions workflow to build and deploy the app.

    A screenshot showing how to commit changes

    7) Switch to the Actions tab along the top navigation again. You should see the workflow running to create the necessary resources and deploy the app. The workflow may take several minutes to run. When it completes successfully, all of the jobs should have a green checkmark icon next to them.

    The completed GitHub workflow.

    Explore the Azure resources

    Once the GitHub Actions workflow has completed successfully you can browse the created resources in the Azure portal.

    1) On the left navigation, select Resource Groups. Next, choose the msdocswebappapis resource group that was created by the GitHub Actions workflow.

    2) You should see seven resources available that match the screenshot and table descriptions below.

    The resources created in Azure.

    Resource name | Type | Description
    inventory | Container app | The containerized inventory API.
    msdocswebappapisacr | Container registry | A registry that stores the built container images for your apps.
    msdocswebappapisai | Application insights | Application Insights provides advanced monitoring, logging and metrics for your apps.
    msdocswebappapisenv | Container apps environment | A container environment that manages networking, security and resource concerns. All of your containers live in this environment.
    msdocswebappapislogs | Log Analytics workspace | A workspace environment for managing logging and analytics for the container apps environment.
    products | Container app | The containerized products API.
    store | Container app | The Blazor front-end web app.

    3) You can view your running app in the browser by clicking on the store container app. On the overview page, click the Application Url link on the upper right of the screen.

    The link to browse the app.
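    You can also grab the store app's public URL from the command line. A sketch, assuming the containerapp CLI extension is installed and the resource names from the table above:

    ```bash
    # Sketch: print the public FQDN of the store container app.
    az containerapp show \
      --name store \
      --resource-group msdocswebappapis \
      --query properties.configuration.ingress.fqdn -o tsv
    ```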

    Understanding the GitHub Actions workflow

    The GitHub Actions workflow created and deployed resources to Azure using the deploy.yml file in the .github/workflows folder of the project. The primary purpose of this file is to respond to events - such as commits to a branch - and run jobs to accomplish tasks. The deploy.yml file in the sample project has three main jobs:

    • Provision: Create the necessary resources in Azure, such as the container apps environment. This step leverages Bicep templates to create the Azure resources, which you'll explore in a moment.
    • Build: Create the container images for the three apps in the project and store them in the container registry.
    • Deploy: Deploy the container images to the different container apps created during the provisioning job.

    The deploy.yml file also accepts parameters to make the workflow more dynamic, such as setting the resource group name or the Azure region resources will be provisioned to.

    Below is a commented version of the deploy.yml file that highlights the essential steps.

    name: Build and deploy .NET application to Container Apps

    # Trigger the workflow on pushes to the deploy branch
    on:
      push:
        branches:
          - deploy

    env:
      # Set workflow variables
      RESOURCE_GROUP_NAME: msdocswebappapis

      REGION: eastus

      STORE_DOCKER: Store/Dockerfile
      STORE_IMAGE: store

      INVENTORY_DOCKER: Store.InventoryApi/Dockerfile
      INVENTORY_IMAGE: inventory

      PRODUCTS_DOCKER: Store.ProductApi/Dockerfile
      PRODUCTS_IMAGE: products

    jobs:
      # Create the required Azure resources
      provision:
        runs-on: ubuntu-latest

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Create resource group
            uses: azure/CLI@v1
            with:
              inlineScript: >
                echo "Creating resource group in Azure"
                echo "Executing 'az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}'"
                az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}

          # Use Bicep templates to create the resources in Azure
          - name: Creating resources
            uses: azure/CLI@v1
            with:
              inlineScript: >
                echo "Creating resources"
                az deployment group create --resource-group ${{ env.RESOURCE_GROUP_NAME }} --template-file '/github/workspace/Azure/main.bicep' --debug

      # Build the three app container images
      build:
        runs-on: ubuntu-latest
        needs: provision

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Set up Docker Buildx
            uses: docker/setup-buildx-action@v1

          - name: Login to ACR
            run: |
              set -euo pipefail
              access_token=$(az account get-access-token --query accessToken -o tsv)
              refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
              docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

          - name: Build the products api image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}
              file: ${{ env.PRODUCTS_DOCKER }}

          - name: Build the inventory api image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}
              file: ${{ env.INVENTORY_DOCKER }}

          - name: Build the frontend image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}
              file: ${{ env.STORE_DOCKER }}

      # Deploy the three container images
      deploy:
        runs-on: ubuntu-latest
        needs: build

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Installing Container Apps extension
            uses: azure/CLI@v1
            with:
              inlineScript: >
                az config set extension.use_dynamic_install=yes_without_prompt

                az extension add --name containerapp --yes

          - name: Login to ACR
            run: |
              set -euo pipefail
              access_token=$(az account get-access-token --query accessToken -o tsv)
              refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
              docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

          - name: Deploy Container Apps
            uses: azure/CLI@v1
            with:
              inlineScript: >
                az containerapp registry set -n products -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n products -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}

                az containerapp registry set -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}

                az containerapp registry set -n store -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n store -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}

          - name: logout
            run: >
              az logout

    Understanding the Bicep templates

    During the provisioning stage of the GitHub Actions workflow, the main.bicep file is processed. Bicep files provide a declarative way of generating resources in Azure and are ideal for managing infrastructure as code. You can learn more about Bicep in the related documentation. The main.bicep file in the sample project creates the following resources:

    • The container registry to store images of the containerized apps.
    • The container apps environment, which handles networking and resource management for the container apps.
    • Three container apps - one for the Blazor front-end and two for the back-end product and inventory APIs.
    • Configuration values to connect these services together

    main.bicep without Dapr

    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
      name: toLower('${resourceGroup().name}acr')
      location: location
      sku: {
        name: 'Basic'
      }
      properties: {
        adminUserEnabled: true
      }
    }

    // create the aca environment
    module env 'environment.bicep' = {
      name: 'containerAppEnvironment'
      params: {
        location: location
      }
    }

    // create the various configuration pairs
    var shared_config = [
      {
        name: 'ASPNETCORE_ENVIRONMENT'
        value: 'Development'
      }
      {
        name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
        value: env.outputs.appInsightsInstrumentationKey
      }
      {
        name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
        value: env.outputs.appInsightsConnectionString
      }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
      name: 'products'
      params: {
        name: 'products'
        location: location
        registryPassword: acr.listCredentials().passwords[0].value
        registryUsername: acr.listCredentials().username
        containerAppEnvironmentId: env.outputs.id
        registry: acr.name
        envVars: shared_config
        externalIngress: false
      }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
      name: 'inventory'
      params: {
        name: 'inventory'
        location: location
        registryPassword: acr.listCredentials().passwords[0].value
        registryUsername: acr.listCredentials().username
        containerAppEnvironmentId: env.outputs.id
        registry: acr.name
        envVars: shared_config
        externalIngress: false
      }
    }

    // create the store api container app
    var frontend_config = [
      {
        name: 'ProductsApi'
        value: 'http://${products.outputs.fqdn}'
      }
      {
        name: 'InventoryApi'
        value: 'http://${inventory.outputs.fqdn}'
      }
    ]

    module store 'container_app.bicep' = {
      name: 'store'
      params: {
        name: 'store'
        location: location
        registryPassword: acr.listCredentials().passwords[0].value
        registryUsername: acr.listCredentials().username
        containerAppEnvironmentId: env.outputs.id
        registry: acr.name
        envVars: union(shared_config, frontend_config)
        externalIngress: true
      }
    }

    main.bicep with Dapr


    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
      name: toLower('${resourceGroup().name}acr')
      location: location
      sku: {
        name: 'Basic'
      }
      properties: {
        adminUserEnabled: true
      }
    }

    // create the aca environment
    module env 'environment.bicep' = {
      name: 'containerAppEnvironment'
      params: {
        location: location
      }
    }

    // create the various config pairs
    var shared_config = [
      {
        name: 'ASPNETCORE_ENVIRONMENT'
        value: 'Development'
      }
      {
        name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
        value: env.outputs.appInsightsInstrumentationKey
      }
      {
        name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
        value: env.outputs.appInsightsConnectionString
      }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
      name: 'products'
      params: {
        name: 'products'
        location: location
        registryPassword: acr.listCredentials().passwords[0].value
        registryUsername: acr.listCredentials().username
        containerAppEnvironmentId: env.outputs.id
        registry: acr.name
        envVars: shared_config
        externalIngress: false
      }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
      name: 'inventory'
      params: {
        name: 'inventory'
        location: location
        registryPassword: acr.listCredentials().passwords[0].value
        registryUsername: acr.listCredentials().username
        containerAppEnvironmentId: env.outputs.id
        registry: acr.name
        envVars: shared_config
        externalIngress: false
      }
    }

    // create the store api container app
    module store 'container_app.bicep' = {
      name: 'store'
      params: {
        name: 'store'
        location: location
        registryPassword: acr.listCredentials().passwords[0].value
        registryUsername: acr.listCredentials().username
        containerAppEnvironmentId: env.outputs.id
        registry: acr.name
        envVars: shared_config
        externalIngress: true
      }
    }


    Bicep Modules

    The main.bicep file references modules to create resources, such as module products. Modules are a feature of Bicep templates that enable you to abstract resource declarations into their own files or sub-templates. As the main.bicep file is processed, the defined modules are also evaluated. Modules allow you to create resources in a more organized and reusable way. They can also define input and output parameters that are passed to and from the parent template, such as the name of a resource.

    For example, the environment.bicep module extracts the details of creating a container apps environment into a reusable template. The module defines necessary resource dependencies such as Log Analytics Workspaces and an Application Insights instance.

    environment.bicep without Dapr

    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
      name: '${baseName}logs'
      location: location
      properties: any({
        retentionInDays: 30
        features: {
          searchVersion: 1
        }
        sku: {
          name: 'PerGB2018'
        }
      })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
      name: '${baseName}ai'
      location: location
      kind: 'web'
      properties: {
        Application_Type: 'web'
        WorkspaceResourceId: logs.id
      }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
      name: '${baseName}env'
      location: location
      properties: {
        appLogsConfiguration: {
          destination: 'log-analytics'
          logAnalyticsConfiguration: {
            customerId: logs.properties.customerId
            sharedKey: logs.listKeys().primarySharedKey
          }
        }
      }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString

    environment.bicep with Dapr


    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString


The container_app.bicep module defines numerous parameters so it can act as a reusable template for creating container apps. This also makes the module easy to reuse from other deployments and CI/CD pipelines.

    container_app.bicep without Dapr

    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn

    container_app.bicep with Dapr


    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param appProtocol string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn


    Understanding configuration differences with Dapr

    The code for this specific sample application is largely the same whether or not Dapr is integrated. However, even with this simple app, there are a few benefits and configuration differences when using Dapr that are worth exploring.

    In this scenario most of the changes are related to communication between the container apps. However, you can explore the full range of Dapr benefits by reading the Dapr integration with Azure Container Apps article in the conceptual documentation.

    Without Dapr

Without Dapr, the main.bicep template handles wiring up the front-end store app to communicate with the back-end APIs by manually managing environment variables. The Bicep template retrieves the fully qualified domain names (FQDNs) of the API apps as output parameters when they are created. Those values are then set as environment variables on the store container app.


// Retrieve environment variables from API container creation
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

// create the store api container app, passing in config
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

The environment variables are then retrieved inside the Program class and used to configure the base URLs of the corresponding HTTP clients.


    builder.Services.AddHttpClient("Products", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("ProductsApi")));
    builder.Services.AddHttpClient("Inventory", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("InventoryApi")));

    With Dapr

    Dapr can be enabled on a container app when it is created, as seen below. This configuration adds a Dapr sidecar to the app to streamline discovery and communication features between the different container apps in your environment.


// Create the container app with Dapr enabled
    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]

// Rest of template omitted for brevity...
    }
    }

Some of these Dapr features can be surfaced through the Program file. You can configure your HttpClient instances to send requests through the Dapr sidecar when communicating with other apps in your environment.


    // reconfigure code to make requests to Dapr sidecar
    var baseURL = (Environment.GetEnvironmentVariable("BASE_URL") ?? "http://localhost") + ":" + (Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500");
    builder.Services.AddHttpClient("Products", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Products");
    });

    builder.Services.AddHttpClient("Inventory", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Inventory");
    });


    Clean up resources

    If you're not going to continue to use this application, you can delete the Azure Container Apps and all the associated services by removing the resource group.

    Follow these steps in the Azure portal to remove the resources you created:

    1. In the Azure portal, navigate to the msdocswebappsapi resource group using the left navigation or search bar.
    2. Select the Delete resource group button at the top of the resource group Overview.
    3. Enter the resource group name msdocswebappsapi in the Are you sure you want to delete "msdocswebappsapi" confirmation dialog.
    4. Select Delete.
      The process to delete the resource group may take a few minutes to complete.
    - + \ No newline at end of file diff --git a/blog/tags/autoscaling/index.html b/blog/tags/autoscaling/index.html index f09b7a0d85..bfacdc38b8 100644 --- a/blog/tags/autoscaling/index.html +++ b/blog/tags/autoscaling/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "autoscaling"


    · 7 min read
    Paul Yu

    Welcome to Day 11 of #30DaysOfServerless!

Yesterday we explored Azure Container Apps concepts related to environments, networking and microservices communication - and illustrated these with a deployment example. Today, we turn our attention to scaling your container apps based on demand.


    What We'll Cover

    • What makes ACA Serverless?
• What is KEDA?
    • Scaling Your ACA
    • ACA Scaling In Action
    • Exercise: Explore azure-opensource-labs examples
    • Resources: For self-study!


    So, what makes Azure Container Apps "serverless"?

    Today we are going to focus on what makes Azure Container Apps (ACA) a "serverless" offering. But what does the term "serverless" really mean? As much as we'd like to think there aren't any servers involved, that is certainly not the case. In general, "serverless" means that most (if not all) server maintenance has been abstracted away from you.

With serverless, you don't spend any time managing and patching servers. This concern is offloaded to Azure and you simply focus on adding business value through application delivery. In addition to operational efficiency, cost efficiency can be achieved with serverless on-demand pricing models. Your workload horizontally scales out based on need and you only pay for what you use. To me, this is serverless, and my teammate @StevenMurawski said it best... "being able to scale to zero is what gives ACA its serverless magic."

    Scaling your Container Apps

If you don't know by now, ACA is built on a solid open-source foundation. Behind the scenes, it runs on a managed Kubernetes cluster and includes several open-source components out of the box, including Dapr to help you build and run microservices, Envoy Proxy for ingress capabilities, and KEDA for event-driven autoscaling. Again, you do not need to install these components yourself. All you need to be concerned with is enabling and/or configuring your container app to leverage these components.

    Let's take a closer look at autoscaling in ACA to help you optimize your container app.

    What is KEDA?

KEDA stands for Kubernetes Event-Driven Autoscaler. It is an open-source project initially started by Microsoft and Red Hat and has been donated to the Cloud Native Computing Foundation (CNCF). It is maintained by a community of 200+ contributors and adopted by many large organizations. In terms of its status as a CNCF project, it is currently at the Incubating stage, which means the project has gone through significant due diligence and is on its way toward the Graduation stage.

Prior to KEDA, horizontally scaling your Kubernetes deployment was achieved through the Horizontal Pod Autoscaler (HPA), which relies on resource metrics such as CPU and memory to determine when additional replicas should be deployed. Being limited to CPU and memory falls a bit short for certain workloads. This is especially true for apps that need to process messages from a queue or HTTP-based apps that can only handle a specific number of incoming HTTP requests at a time. KEDA aims to fill that gap and provides a much more robust framework for scaling by working in conjunction with HPA. It offers many scalers for you to implement and even allows your deployments to scale to zero! 🥳

    KEDA architecture

    Configuring ACA scale rules

As I mentioned above, ACA's autoscaling feature leverages KEDA and gives you the ability to configure the number of replicas to deploy based on rules (event triggers). The number of replicas can be configured as a static number or a range (minimum and maximum). So if you need your containers to run 24/7, set the min and max to the same value. By default, when you deploy a container app, it is set to scale from 0 to 10 replicas. The default scaling rule uses HTTP scaling and defaults to a minimum of 10 concurrent requests per second. Once the threshold of 10 concurrent requests per second is met, another replica will be deployed until it reaches the maximum number of replicas.

    At the time of this writing, a container app can have up to 30 replicas.

    Default autoscaler

As a best practice, if you have a min/max replica range configured, you should also configure a scaling rule, even if it just explicitly sets the default values.

    Adding HTTP scaling rule
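If you prefer to manage this in infrastructure-as-code rather than the portal, the rule can also live in the template block of a Microsoft.App/containerApps resource. The sketch below is illustrative only - it reuses the repositoryImage, name, and envVars parameter names from the container_app.bicep sample shown earlier on this page, and the rule name and the concurrentRequests threshold of '10' are assumptions mirroring the defaults described above, not values taken from the lab repo.

// Illustrative fragment: template section of a Microsoft.App/containerApps resource
template: {
  containers: [
    {
      image: repositoryImage
      name: name
      env: envVars
    }
  ]
  scale: {
    minReplicas: 0                      // allow scale to zero
    maxReplicas: 10
    rules: [
      {
        name: 'http-rule'               // illustrative rule name
        http: {
          metadata: {
            concurrentRequests: '10'    // scale out once ~10 concurrent requests per replica is exceeded
          }
        }
      }
    ]
  }
}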

    In addition to HTTP scaling, you can also configure an Azure queue rule, which allows you to use Azure Storage Queues as an event data source.

    Adding Azure Queue scaling rule

    The most flexibility comes with the Custom rule type. This opens up a LOT more options for scaling. All of KEDA's event-based scalers are supported with this option 🚀

    Adding Custom scaling rule

    Translating KEDA templates to Azure templates

When you implement custom rules, you need to become familiar with translating KEDA templates to Azure Resource Manager templates or ACA YAML manifests. The KEDA scaler documentation is great, and it should be simple to translate KEDA template metadata to ACA rule metadata.

The images below show how to translate a scaling rule which uses Azure Service Bus as an event data source. The custom rule type is set to azure-servicebus and the details of the Service Bus are added to the Metadata section. One important thing to note here is that the connection string to the Service Bus was added as a secret on the container app, and the trigger parameter must be set to connection.

    Azure Container App custom rule metadata

    Azure Container App custom rule metadata
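For comparison, here is a hedged Bicep sketch of how that same Azure Service Bus rule might look as a custom scale rule on the container app resource. The queue name, message count, and rule name are placeholders, and the connection string is assumed to already exist as a container app secret (referenced below as servicebus-connection-string).

// Illustrative fragment: a KEDA azure-servicebus scaler translated to an ACA custom scale rule
scale: {
  minReplicas: 0
  maxReplicas: 10
  rules: [
    {
      name: 'servicebus-queue-rule'                   // illustrative rule name
      custom: {
        type: 'azure-servicebus'                      // matches the KEDA scaler type
        metadata: {
          queueName: 'myqueue'                        // placeholder queue name
          messageCount: '5'                           // scale out for every 5 unprocessed messages
        }
        auth: [
          {
            secretRef: 'servicebus-connection-string' // container app secret holding the connection string
            triggerParameter: 'connection'            // must be 'connection', as noted above
          }
        ]
      }
    }
  ]
}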

    Additional examples of KEDA scaler conversion can be found in the resources section and example video below.

    See Container App scaling in action

    Now that we've built up some foundational knowledge on how ACA autoscaling is implemented and configured, let's look at a few examples.

    Autoscaling based on HTTP traffic load

    Autoscaling based on Azure Service Bus message queues

    Summary

ACA brings you a true serverless experience and gives you the ability to configure autoscaling rules based on KEDA scaler templates. This gives you the flexibility to scale based on a wide variety of data sources in an event-driven manner. With the number of built-in scalers currently available, there is probably a scaler out there for all your use cases. If not, I encourage you to get involved with the KEDA community and help make it better!

    Exercise

By now, you've probably read and seen enough and are ready to give this autoscaling thing a try. The example I walked through in the videos above can be found at the azure-opensource-labs repo. I highly encourage you to head over to the containerapps-terraform folder and try the lab out. There you'll find instructions covering all the steps and tools you'll need to implement autoscaling container apps within your own Azure subscription.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun scaling your containers!

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/azd/index.html b/blog/tags/azd/index.html index 04989aee0c..94111c89bc 100644 --- a/blog/tags/azd/index.html +++ b/blog/tags/azd/index.html @@ -14,7 +14,7 @@ - + @@ -26,7 +26,7 @@

    ...and that's it! We've successfully deployed our application on Azure!

    But there's more!

    Best practices: Monitoring and CI/CD!

    In my opinion, it's not enough to just set up the application on Azure! I want to know that my web app is performant and serving my users reliably! I also want to make sure that I'm not inadvertently breaking my application as I continue to make changes to it. Thankfully, the Azure Developer CLI also handles all of this via two additional commands - azd monitor and azd pipeline config.

    Application Monitoring

    When we provisioned all of our infrastructure, we also set up application monitoring via a Bicep file in our .infra/ directory that spec'd out an Application Insights dashboard. By running azd monitor we can see the dashboard with live metrics that was configured for the application.

    We can also navigate to the Application Dashboard by clicking on the resource group name, where you can set a specific refresh rate for the dashboard, and see usage, reliability, and performance metrics over time.

    I don't know about everyone else but I have spent a ton of time building out similar dashboards. It can be super time-consuming to write all the queries and create the visualizations so this feels like a real time saver.

    CI/CD

    Finally let's talk about setting up CI/CD! This might be my favorite azd feature. As I mentioned before, the Azure Developer CLI has a command, azd pipeline config, which uses the files in the .github/ directory to set up a GitHub Action. More than that, if there is no upstream repo, the Developer CLI will actually help you create one. But what does this mean exactly? Because our GitHub Action is using the same commands you'd run in the CLI under the hood, we're actually going to have CI/CD set up to run on every commit into the repo, against real Azure resources. What a sweet collaboration feature!

    That's it! We've gone end-to-end with the Azure Developer CLI - initialized a project, provisioned the resources on Azure, deployed our code on Azure, set up monitoring logs and dashboards, and set up a CI/CD pipeline with GitHub Actions to run on every commit into the repo (on real Azure resources!).

    Exercise: Try it yourself or create your own template!

    As an exercise, try out the workflow above with any template on GitHub!

    Or, try turning your own project into an Azure Developer CLI-enabled template by following this guidance. If you create your own template, don't forget to tag the repo with the azd-templates topic on GitHub to help others find it (unfamiliar with GitHub topics? Learn how to add topics to your repo)! We'd also love to chat with you about your experience creating an azd template - if you're open to providing feedback around this, please fill out this form!

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/index.html b/blog/tags/azure-container-apps/index.html index 81e0b7b52a..639f978a4b 100644 --- a/blog/tags/azure-container-apps/index.html +++ b/blog/tags/azure-container-apps/index.html @@ -14,14 +14,14 @@ - +

    20 posts tagged with "azure-container-apps"


    · 7 min read
    Devanshi Joshi

It's Serverless September in a Nutshell! Join us as we unpack our month-long learning journey exploring the core technology pillars for serverless architectures on Azure. We'll end with a look at next steps for building your cloud-native applications on Azure.


    What We'll Cover

    • Functions-as-a-Service (FaaS)
    • Microservices and Containers
    • Serverless Integrations
    • End-to-End Solutions
    • Developer Tools & #Hacktoberfest

    Banner for Serverless September


    Building Cloud-native Apps

    By definition, cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. You can learn more about cloud-native in Kendall Roden's #ServerlessSeptember post on Going Cloud-native with Azure Container Apps.

Serverless technologies accelerate productivity and minimize costs for deploying applications at cloud scale. So, what can we build with serverless technologies in cloud-native on Azure? Anything that is event-driven - examples include:

    • Microservices - scaled by KEDA-compliant triggers
    • Public API Endpoints - scaled by #concurrent HTTP requests
    • Event-Driven Applications - scaled by length of message queue
    • Web Applications - scaled by #concurrent HTTP requests
    • Background Process - scaled by CPU and Memory usage

    Great - but as developers, we really want to know how we can get started building and deploying serverless solutions on Azure. That was the focus of our #ServerlessSeptember journey. Let's take a quick look at the four key themes.

    Functions-as-a-Service (FaaS)

Functions-as-a-Service (FaaS) is the epitome of developer productivity for full-stack modern apps. As developers, you don't manage infrastructure and focus only on business logic and application code. And, with serverless compute you only pay when your code runs - making this the simplest first step to begin migrating your application to cloud-native.

In Azure, FaaS is provided by Azure Functions. Check out our Functions + Serverless on Azure to go from learning core concepts to building your first Functions app in your programming language of choice. Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell.

Want to get extended language support for languages like Go and Rust? You can Use Custom Handlers to make this happen! But what if you want to have long-running functions, or create complex workflows involving more than one function? Read our post on Durable Entities to learn how you can orchestrate this with Azure Functions.

    Check out this recent AskTheExpert Q&A session with the Azure Functions team to get answers to popular community questions on Azure Functions features and usage.

    Microservices and Containers

    Functions-as-a-Service is an ideal first step towards serverless development. But Functions are just one of the 5 pillars of cloud-native. This week we'll look at two of the other pillars: microservices and containers - with specific focus on two core technologies: Azure Container Apps and Dapr (Distributed Application Runtime).

    In this 6-part series of posts, we walk through each technology independently, before looking at the value of building Azure Container Apps with Dapr.

    • In Hello Container Apps we learned core concepts & deployed our first ACA.
    • In Microservices Communication we learned about ACA environments and virtual networks, and how microservices communicate in ACA with a hands-on tutorial.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and configuring ACA for autoscaling with KEDA-compliant triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr), exploring its Building Block APIs and sidecar architecture for working with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
• Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Build ACA with Dapr

    Check out this recent AskTheExpert Q&A session with the Azure Container Apps team for answers to popular community questions on core features and usage.

    Serverless Integrations

    In the first half of the month we looked at compute resources for building and deploying serverless applications. In the second half, we look at integration tools and resources that automate developer workflows to streamline the end-to-end developer experience.

In Azure, this is enabled by services like Azure Logic Apps and Azure Event Grid. Azure Logic Apps provides a visual designer to create and automate workflows with little or no code involved. Azure Event Grid provides a highly scalable event broker with support for pub/sub communications to drive async event-driven architectures.

    • In Tracking Weather Data Changes With Logic Apps we look at how you can use Logic Apps to integrate the MSN weather service with Azure CosmosDB, allowing automated collection of weather data on changes.

    • In Teach the Cloud to Read & Categorize Mail we take it a step further, using Logic Apps to automate a workflow that includes a Computer Vision service to "read" images and store the results to CosmosDB.

    • In Integrate with Microsoft Graph we explore a multi-cloud scenario (Azure + M365) where change notifications from Microsoft Graph can be integrated using Logic Apps and Event Hubs to power an onboarding workflow.

    • In Cloud Events with Event Grid we learn about the CloudEvents specification (for consistently describing event data) - and learn how Event Grid brokers events in this format. Azure Logic Apps can be an Event handler (subscriber) that uses the event to trigger an automated workflow on receipt.

      Azure Event Grid And Logic Apps

    Want to explore other such integrations? Browse Azure Architectures and filter by selected Azure services for more real-world scenarios.


    End-to-End Solutions

    We've covered serverless compute solutions (for building your serverless applications) and serverless integration services to automate end-to-end workflows in synchronous or asynchronous event-driven architectures. In this final week, we want to leave you with a sense of end-to-end development tools and use cases that can be enabled by Serverless on Azure. Here are some key examples:

• In this tutorial, you'll learn to deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps - with a Blazor front-end and two Web API projects.
• Deploy Java containers to cloud - In this tutorial you learn to build and deploy a Java application running on Spring Boot, by publishing it in a container to Azure Container Registry, then deploying it to Azure Container Apps from ACR via the Azure portal.
• Where am I? My GPS Location with Serverless Power Platform Custom Connector - In this step-by-step tutorial you learn to integrate a serverless application (built on Azure Functions and OpenAPI) with Power Platform custom connectors via Azure API Management (API-M). This pattern can empower a new ecosystem of fusion apps for cases like inventory management.
    And in our Serverless Hacks initiative, we walked through an 8-step hack to build a serverless tollbooth. Check out this 12-part video walkthrough of a reference solution using .NET.

    Developer Tools

But wait - there's more. Those are a sample of the end-to-end application scenarios that are built on serverless on Azure. But what about the developer experience? In this article, we say hello to the Azure Developer CLI - an open-source tool that streamlines your develop-deploy workflow, with simple commands that map to core stages of your development journey. Go from code to cloud with one CLI!

    And watch this space for more such tutorials and content through October, including a special #Hacktoberfest focused initiative to encourage and support first-time contributors to open-source. Here's a sneak peek at the project we plan to share - the new awesome-azd templates gallery.


    Join us at Microsoft Ignite!

    Want to continue your learning journey, and learn about what's next for Serverless on Azure? Microsoft Ignite happens Oct 12-14 this year and has multiple sessions on relevant technologies and tools. Check out the Session Catalog and register here to attend online.

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/10/index.html b/blog/tags/azure-container-apps/page/10/index.html index 1a79c922f4..38c24e61fa 100644 --- a/blog/tags/azure-container-apps/page/10/index.html +++ b/blog/tags/azure-container-apps/page/10/index.html @@ -14,14 +14,14 @@ - +

    20 posts tagged with "azure-container-apps"


    · 11 min read
    Kendall Roden

    Welcome to Day 13 of #30DaysOfServerless!

    In the previous post, we learned about all things Distributed Application Runtime (Dapr) and highlighted the capabilities you can unlock through managed Dapr in Azure Container Apps! Today, we'll dive into how we can make use of Container Apps secrets and managed identities to securely access cloud-hosted resources that your Container Apps depend on!

    Ready? Let's go.


    What We'll Cover

    • Secure access to external services overview
    • Using Container Apps Secrets
    • Using Managed Identity for connecting to Azure resources
    • Using Dapr secret store component references (Dapr-only)
    • Conclusion
    • Resources: For self-study!


    Securing access to external services

In most, if not all, microservice-based applications, one or more services in the system will rely on other cloud-hosted resources; think external services like databases, secret stores, message brokers, event sources, etc. To interact with these services, an application must have the ability to establish a secure connection. Traditionally, an application will authenticate to these backing resources using some type of connection string or password.

    I'm not sure if it was just me, but one of the first things I learned as a developer was to ensure credentials and other sensitive information were never checked into the codebase. The ability to inject these values at runtime is a non-negotiable.

    In Azure Container Apps, applications can securely leverage connection information via Container Apps Secrets. If the resource is Azure-based, a more ideal solution that removes the dependence on secrets altogether is using Managed Identity.

    Specifically for Dapr-enabled container apps, users can now tap into the power of the Dapr secrets API! With this new capability unlocked in Container Apps, users can call the Dapr secrets API from application code to securely access secrets from Key Vault or other backing secret stores. In addition, customers can also make use of a secret store component reference when wiring up Dapr state store components and more!

ALSO, I'm excited to share that support for Dapr + Managed Identity is now available! What does this mean? It means that you can enable Managed Identity for your container app - and when establishing connections via Dapr, the Dapr sidecar can use this identity! This means simplified components without the need for secrets when connecting to Azure services!

    Let's dive a bit deeper into the following three topics:

    1. Using Container Apps secrets in your container apps
    2. Using Managed Identity to connect to Azure services
    3. Connecting to services securely for Dapr-enabled apps

    Secure access to external services without Dapr

    Leveraging Container Apps secrets at runtime

Users can leverage this approach for any values which need to be securely stored; however, it is recommended to use Managed Identity where possible when connecting to Azure-specific resources.

    First, let's establish a few important points regarding secrets in container apps:

    • Secrets are scoped at the container app level, meaning secrets cannot be shared across container apps today
    • When running in multiple-revision mode,
      • changes to secrets do not generate a new revision
• running revisions will not be automatically restarted to reflect changes. If you want to force-update existing container app revisions to reflect the changed secret values, you will need to perform revision restarts.
    STEP 1

    Provide the secure value as a secret parameter when creating your container app using the syntax "SECRET_NAME=SECRET_VALUE"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name queuereader \
    --environment "my-environment-name" \
    --image demos/queuereader:v1 \
    --secrets "queue-connection-string=$CONNECTION_STRING"
    STEP 2

    Create an environment variable which references the value of the secret created in step 1 using the syntax "ENV_VARIABLE_NAME=secretref:SECRET_NAME"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name myQueueApp \
    --environment "my-environment-name" \
    --image demos/myQueueApp:v1 \
    --secrets "queue-connection-string=$CONNECTIONSTRING" \
    --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"

    This ConnectionString environment variable can be used within your application code to securely access the connection string value at runtime.
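If you are deploying with Bicep instead of the CLI, the same secret-plus-environment-variable wiring can be expressed directly on the container app resource. Below is a minimal, illustrative sketch - the parameter names, app name, and image are assumptions for this example, not taken from the article's sample.

// Illustrative Bicep sketch: a Container Apps secret referenced by an environment variable
param location string = resourceGroup().location
param containerAppEnvironmentId string
@secure()
param queueConnectionString string

resource myQueueApp 'Microsoft.App/containerApps@2022-01-01-preview' = {
  name: 'myqueueapp'
  location: location
  properties: {
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
      secrets: [
        {
          name: 'queue-connection-string'           // stored as a Container Apps secret
          value: queueConnectionString
        }
      ]
    }
    template: {
      containers: [
        {
          name: 'myqueueapp'
          image: 'demos/myQueueApp:v1'
          env: [
            {
              name: 'QueueName'
              value: 'myqueue'
            }
            {
              name: 'ConnectionString'
              secretRef: 'queue-connection-string'  // referenced, never embedded in plain text
            }
          ]
        }
      ]
    }
  }
}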

    Using Managed Identity to connect to Azure services

    A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. This approach is recommended where possible as it eliminates the need for managing secret credentials in your container apps and allows you to properly scope the permissions needed for a given container app using role-based access control. Both system-assigned and user-assigned identities are available in container apps. For more background on managed identities in Azure AD, see Managed identities for Azure resources.

To configure your app with a system-assigned managed identity, you will follow steps similar to the following:

    STEP 1

    Run the following command to create a system-assigned identity for your container app

    az containerapp identity assign \
    --name "myQueueApp" \
    --resource-group "my-resource-group" \
    --system-assigned
    STEP 2

    Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    az containerapp identity show \
    --name "myQueueApp" \
    --resource-group "my-resource-group"
    STEP 3

    Assign the appropriate roles and permissions to your container app's managed identity using the Principal ID in step 2 based on the resources you need to access (example below)

    az role assignment create \
    --role "Storage Queue Data Contributor" \
    --assignee $PRINCIPAL_ID \
    --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/queueServices/default/queues/<queue>"

After running the above commands, your container app will be able to access your Azure Storage queue because its managed identity has been assigned the "Storage Queue Data Contributor" role. The role assignments you create will depend solely on the resources your container app needs to access. To instrument your code to use this managed identity, see more details here.
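If you manage your resources with Bicep, the same steps can be sketched declaratively: turn on the system-assigned identity on the container app and create a role assignment scoped to the storage account. This is an illustrative sketch only - the storage account parameter, the role definition ID parameter, and the trimmed-down container app resource are assumptions, not the article's own template.

// Illustrative Bicep sketch: system-assigned identity plus a role assignment on a storage account
param location string = resourceGroup().location
param containerAppEnvironmentId string
param storageAccountName string
// GUID of the built-in role to grant (for example, Storage Queue Data Contributor); look it up in the Azure built-in roles reference
param roleDefinitionId string

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-05-01' existing = {
  name: storageAccountName
}

resource myQueueApp 'Microsoft.App/containerApps@2022-01-01-preview' = {
  name: 'myqueueapp'
  location: location
  identity: {
    type: 'SystemAssigned'    // equivalent of `az containerapp identity assign --system-assigned`
  }
  properties: {
    managedEnvironmentId: containerAppEnvironmentId
    // configuration and template omitted for brevity...
  }
}

resource queueRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(storageAccount.id, myQueueApp.id, roleDefinitionId)   // deterministic, unique assignment name
  scope: storageAccount
  properties: {
    principalId: myQueueApp.identity.principalId                   // the container app's managed identity
    principalType: 'ServicePrincipal'
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleDefinitionId)
  }
}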

    In addition to using managed identity to access services from your container app, you can also use managed identity to pull your container images from Azure Container Registry.

    Secure access to external services with Dapr

    For Dapr-enabled apps, there are a few ways to connect to the resources your solutions depend on. In this section, we will discuss when to use each approach.

    1. Using Container Apps secrets in your Dapr components
    2. Using Managed Identity with Dapr Components
    3. Using Dapr Secret Stores for runtime secrets and component references

    Using Container Apps secrets in Dapr components

Prior to providing support for the Dapr Secrets Management building block, this was the only approach available for securely storing sensitive values for use in Dapr components.

In Dapr OSS, when no secret store reference is provided in a Dapr component file, the default secret store is set to "Kubernetes secrets". In Container Apps, we do not expose the ability to use this default store. Rather, Container Apps secrets can be used in its place.

    With the introduction of the Secrets API and the ability to use Dapr + Managed Identity, this approach is useful for a limited number of scenarios:

    • Quick demos and dev/test scenarios using the Container Apps CLI
    • Securing values when a secret store is not configured or available for use
• Using service principal credentials to configure an Azure Key Vault secret store component (using Managed Identity is recommended)
    • Securing access credentials which may be required when creating a non-Azure secret store component
    STEP 1

    Create a Dapr component which can be used by one or more services in the container apps environment. In the below example, you will create a secret to store the storage account key and reference this secret from the appropriate Dapr metadata property.

componentType: state.azure.blobstorage
version: v1
metadata:
- name: accountName
  value: testStorage
- name: accountKey
  secretRef: account-key
- name: containerName
  value: myContainer
secrets:
- name: account-key
  value: "<STORAGE_ACCOUNT_KEY>"
scopes:
- myApp
    STEP 2

    Deploy the Dapr component using the below command with the appropriate arguments.

     az containerapp env dapr-component set \
    --name "my-environment" \
    --resource-group "my-resource-group" \
    --dapr-component-name statestore \
    --yaml "./statestore.yaml"

    Using Managed Identity with Dapr Components

Dapr-enabled container apps can now make use of managed identities within Dapr components. This is the ideal path for connecting to Azure services securely, and allows for the removal of sensitive values in the component itself.

The Dapr sidecar makes use of the existing identities available within a given container app; Dapr itself does not have its own identity. Therefore, the steps to enable Dapr + MI are similar to those in the section regarding managed identity for non-Dapr apps. See the example steps below specifically for using a system-assigned identity:

    1. Create a system-assigned identity for your container app

    2. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    3. Assign the appropriate roles and permissions (for accessing resources backing your Dapr components) to your ACA's managed identity using the Principal ID

    4. Create a simplified Dapr component without any secrets required

  componentType: state.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: testStorage
  - name: containerName
    value: myContainer
  scopes:
  - myApp
    5. Deploy the component to test the connection from your container app via Dapr!

Keep in mind, all Dapr components will be loaded by each Dapr-enabled container app in an environment by default. To prevent apps without the appropriate permissions from unsuccessfully loading a component, use scopes. This will ensure that only applications with the appropriate identities to access the backing resource load the component.
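As a rough idea of what step 4 could look like if you declare the component in Bicep instead of YAML + CLI, a secret-free Dapr component can be created as a child of the managed environment. The environment name, storage account name, container name, and scope below are placeholders for illustration, not values from the article.

// Illustrative Bicep sketch: a secret-free Dapr state store component scoped to a single app
param environmentName string
param storageAccountName string = 'testStorage'    // placeholder account name

resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' existing = {
  name: environmentName
}

resource stateStore 'Microsoft.App/managedEnvironments/daprComponents@2022-01-01-preview' = {
  name: 'statestore'
  parent: env
  properties: {
    componentType: 'state.azure.blobstorage'
    version: 'v1'
    metadata: [
      {
        name: 'accountName'
        value: storageAccountName                   // no accountKey secret - access comes from the app's managed identity
      }
      {
        name: 'containerName'
        value: 'myContainer'
      }
    ]
    scopes: [
      'myApp'                                       // only the app with this Dapr app id loads the component
    ]
  }
}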

    Using Dapr Secret Stores for runtime secrets and component references

    Dapr integrates with secret stores to provide apps and other components with secure storage and access to secrets such as access keys and passwords. The Dapr Secrets API is now available for use in Container Apps.

    Using Dapr’s secret store building block typically involves the following:

    • Setting up a component for a specific secret store solution.
    • Retrieving secrets using the Dapr secrets API in the application code.
    • Optionally, referencing secrets in Dapr component files.

    Let's walk through a couple sample workflows involving the use of Dapr's Secrets Management capabilities!

    Setting up a component for a specific secret store solution

    1. Create an Azure Key Vault instance for hosting the secrets required by your application.

      az keyvault create --name "<your-unique-keyvault-name>" --resource-group "my-resource-group" --location "<your-location>"
2. Create an Azure Key Vault component in your environment without the secret values, as the connection will be established to Azure Key Vault via Managed Identity.

  componentType: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  scopes:
  - myApp
      az containerapp env dapr-component set \
      --name "my-environment" \
      --resource-group "my-resource-group" \
      --dapr-component-name secretstore \
      --yaml "./secretstore.yaml"
    3. Run the following command to create a system-assigned identity for your container app

      az containerapp identity assign \
      --name "myApp" \
      --resource-group "my-resource-group" \
      --system-assigned
    4. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

      az containerapp identity show \
      --name "myApp" \
      --resource-group "my-resource-group"
    5. Assign the appropriate roles and permissions to your container app's managed identity to access Azure Key Vault

      az role assignment create \
      --role "Key Vault Secrets Officer" \
      --assignee $PRINCIPAL_ID \
      --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
    6. Begin using the Dapr Secrets API in your application code to retrieve secrets! See additional details here.

    Referencing secrets in Dapr component files

    Once a Dapr secret store component is available in the environment, it can be used to retrieve secrets for use in other components. For example, when creating a state store component, you can add a reference to the Dapr secret store from which you would like to source connection information. You will no longer use secrets directly in the component spec, but rather will instruct the Dapr sidecar to retrieve the secrets from the specified store.

componentType: state.azure.blobstorage
version: v1
metadata:
- name: accountName
  value: testStorage
- name: accountKey
  secretRef: account-key
- name: containerName
  value: myContainer
secretStoreComponent: "<SECRET_STORE_COMPONENT_NAME>"
scopes:
- myApp

    Summary

In this post, we have covered the high-level details on how to work with secret values in Azure Container Apps for both Dapr and non-Dapr apps. In the next article, we will walk through a complex Dapr example from end to end which makes use of the new support for Dapr + Managed Identity. Stay tuned for additional documentation around Dapr secrets, as it will be released in the next two weeks!

    Resources

    Here are the main resources to explore for self-study:

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/11/index.html b/blog/tags/azure-container-apps/page/11/index.html index 0925bfd33a..4fcf53990a 100644 --- a/blog/tags/azure-container-apps/page/11/index.html +++ b/blog/tags/azure-container-apps/page/11/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"


    · 8 min read
    Nitya Narasimhan

    Welcome to Day 12 of #30DaysOfServerless!

    So far we've looked at Azure Container Apps - what it is, how it enables microservices communication, and how it enables auto-scaling with KEDA compliant scalers. Today we'll shift gears and talk about Dapr - the Distributed Application Runtime - and how it makes microservices development with ACA easier with core building blocks and a sidecar architecture!

    Ready? Let's go!


    What We'll Cover

    • What is Dapr and why use it?
    • Building Block APIs
    • Dapr Quickstart and Tutorials
    • Dapr-enabled ACA: A Sidecar Approach
    • Exercise: Build & Deploy a Dapr-enabled ACA.
    • Resources: For self-study!


    Hello, Dapr!

Building distributed applications is hard. Building reliable and portable microservices means having middleware that deals with challenges like service discovery, sync and async communications, state management, secure information sharing and more. Integrating these support services into your application can be challenging from both development and maintenance perspectives, adding complexity that is independent of the core application logic you want to focus on.

This is where Dapr (Distributed Application Runtime) shines - it's defined as:

    a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.

    But what does this actually mean to me as an app developer?


    Dapr + Apps: A Sidecar Approach

    The strength of Dapr lies in its ability to:

    • abstract complexities of distributed systems middleware - with Building Block APIs that implement components using best practices to tackle key challenges.
    • implement a Sidecar Pattern with interactions via APIs - allowing applications to keep their codebase clean and focus on app logic.
    • be Incrementally Adoptable - allowing developers to start by integrating one API, then evolving to use more as and when needed.
    • be Platform Agnostic - allowing applications to be developed in a preferred language or framework without impacting integration capabilities.

The application-to-Dapr-sidecar interaction is illustrated below. The API abstraction allows applications to get the desired functionality without having to know how it was implemented, and without having to integrate Dapr-specific code into their codebase. Note how the sidecar process listens on port 3500 and the API provides clear routes for the specific building blocks supported by Dapr (e.g., /secrets, /state, etc.)


    Dapr Building Blocks: API Interactions

Dapr Building Blocks refers to the HTTP and gRPC API endpoints exposed by the Dapr sidecar, providing key capabilities like state management, observability, service-to-service invocation, pub/sub messaging and more to the associated application.

    Building Blocks: Under the Hood
The Dapr API is implemented by modular components that codify best practices for tackling the specific challenge that they represent. The API abstraction allows component implementations to evolve, or alternatives to be used, without requiring changes to the application codebase.

    The latest Dapr release has the building blocks shown in the above figure. Not all capabilities are available to Azure Container Apps by default - check the documentation for the latest updates on this. For now, Azure Container Apps + Dapr integration provides the following capabilities to the application:

    In the next section, we'll dive into Dapr-enabled Azure Container Apps. Before we do that, here are a couple of resources to help you explore the Dapr platform by itself, and get more hands-on experience with the concepts and capabilities:

• Dapr Quickstarts - build your first Dapr app, then explore quickstarts for core APIs including service-to-service invocation, pub/sub, state management, bindings and secrets management.
    • Dapr Tutorials - go beyond the basic quickstart and explore more realistic service integrations and usage scenarios. Try the distributed calculator example!

    Integrate Dapr & Azure Container Apps

Dapr currently has a v1.9 (preview) version, but Azure Container Apps supports Dapr v1.8. In this section, we'll look at what it takes to enable, configure, and use Dapr integration with Azure Container Apps. It involves 3 steps: enabling Dapr using settings, configuring Dapr components (API) for use, then invoking the APIs.

Here's a simple publisher-subscriber scenario from the documentation. We have two container apps identified as publisher-app and subscriber-app deployed in a single environment. Each ACA has an activated daprd sidecar, allowing them to use the Pub/Sub API to communicate asynchronously with each other - without having to write the underlying pub/sub implementation themselves. Rather, we can see that the Dapr API uses a pubsub.azure.servicebus component to implement that capability.

    Pub/sub example

    Let's look at how this is setup.

    1. Enable Dapr in ACA: Settings

    We can enable Dapr integration in the Azure Container App during creation by specifying settings in one of two ways, based on your development preference:

    • Using Azure CLI: use custom commandline options for each setting
    • Using Infrastructure-as-Code (IaC): using properties for Bicep, ARM templates

Once enabled, the Dapr sidecar will run alongside the Azure Container App and listen on port 3500 for API requests. Dapr components defined in the environment can be shared by multiple container apps deployed in that same environment.

    There are four main settings we will focus on for this demo - the example below shows the ARM template properties, but you can find the equivalent CLI parameters here for comparison.

    • dapr.enabled - enable Dapr for Azure Container App
    • dapr.appPort - specify port on which app is listening
    • dapr.appProtocol - specify if using http (default) or gRPC for API
    • dapr.appId - specify unique application ID for service discovery, usage

These are defined under the properties.configuration section for your resource. Changing Dapr settings does not create a new revision, but it will restart ACA revisions and replicas. Here is what the relevant section of the ARM template looks like for the publisher-app ACA in the scenario shown above.

    "dapr": {
    "enabled": true,
    "appId": "publisher-app",
    "appProcotol": "http",
    "appPort": 80
    }

    2. Configure Dapr in ACA: Components

The next step after activating the Dapr sidecar is to define the APIs that you want to use and potentially specify the Dapr components (specific implementations of that API) that you prefer. These components are created at the environment level, and by default, Dapr-enabled container apps in an environment will load the complete set of deployed components -- use the scopes property to ensure only components needed by a given app are loaded at runtime. Here's what the ARM template resources section looks like for the example above. This tells us that the environment has a dapr-pubsub component of type pubsub.azure.servicebus deployed - where that component is loaded by container apps with the Dapr app ids publisher-app and subscriber-app.

    USING MANAGED IDENTITY + DAPR

The secrets approach used here is ideal for demo purposes. However, we recommend using Managed Identity with Dapr in production. For more details on secrets, check out tomorrow's post on Secrets and Managed Identity in Azure Container Apps.

    {
    "resources": [
    {
    "type": "daprComponents",
    "name": "dapr-pubsub",
    "properties": {
    "componentType": "pubsub.azure.servicebus",
    "version": "v1",
    "secrets": [
    {
    "name": "sb-root-connectionstring",
    "value": "value"
    }
    ],
    "metadata": [
    {
    "name": "connectionString",
    "secretRef": "sb-root-connectionstring"
    }
    ],
    // Application scopes
    "scopes": ["publisher-app", "subscriber-app"]

    }
    }
    ]
    }

    With this configuration, the ACA is now set to use pub/sub capabilities from the Dapr sidecar, using standard HTTP requests to the exposed API endpoint for this service.
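Since the same settings can be supplied as Bicep instead of ARM (as noted earlier), here is a hedged Bicep sketch of the dapr-pubsub component above. The environment reference and the connection string parameter are assumptions for illustration - and, per the callout above, in production you would prefer Managed Identity over a connection-string secret.

// Illustrative Bicep sketch: the dapr-pubsub component from the ARM example, declared on the environment
@secure()
param serviceBusConnectionString string

resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' existing = {
  name: 'my-environment'                            // placeholder environment name
}

resource pubsub 'Microsoft.App/managedEnvironments/daprComponents@2022-01-01-preview' = {
  name: 'dapr-pubsub'
  parent: env
  properties: {
    componentType: 'pubsub.azure.servicebus'
    version: 'v1'
    secrets: [
      {
        name: 'sb-root-connectionstring'
        value: serviceBusConnectionString
      }
    ]
    metadata: [
      {
        name: 'connectionString'
        secretRef: 'sb-root-connectionstring'       // metadata references the secret by name
      }
    ]
    scopes: [
      'publisher-app'                               // only these Dapr app ids load the component
      'subscriber-app'
    ]
  }
}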

    Exercise: Deploy Dapr-enabled ACA

    In the next couple posts in this series, we'll be discussing how you can use the Dapr secrets API and doing a walkthrough of a more complex example, to show how Dapr-enabled Azure Container Apps are created and deployed.

However, you can get hands-on experience with these concepts by walking through one of these two tutorials, each providing an alternative approach to configure and set up the application described in the scenario below:

    Resources

    Here are the main resources to explore for self-study:

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/12/index.html b/blog/tags/azure-container-apps/page/12/index.html index 57a23efe3f..ffa2c96e32 100644 --- a/blog/tags/azure-container-apps/page/12/index.html +++ b/blog/tags/azure-container-apps/page/12/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"


    · 6 min read
    Melony Qin

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Custom Handlers, and why use them?
    • How Custom Handler Works
    • Message Processing With Azure Custom Handler
    • Azure Portal Monitoring


If you have been working with Azure Functions for a while, you may know Azure Functions is a serverless FaaS (Function-as-a-Service) offering provided by Microsoft Azure, built for key scenarios including building web APIs, processing file uploads, responding to database changes, processing IoT data streams, managing message queues, and more.

    Custom Handlers: What and Why

Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell. If you want extended language support with Azure Functions for other languages such as Go and Rust, that's where custom handlers come in.

An Azure Functions custom handler lets you author functions in any language that supports HTTP primitives. With custom handlers, you can use triggers and input and output bindings via extension bundles, so they support all the triggers and bindings you're used to with Azure Functions.

    How a Custom Handler Works

Let's take a look at custom handlers and how they work.

• A request is sent to the function host when an event is triggered. It's up to the function host to issue a request payload to the custom handler, which holds the trigger and input binding data as well as other metadata for the function.
• The custom handler is a local HTTP web server. It executes the function code and returns a response payload to the Functions host.
• The Functions host passes data from the response to the function's output bindings, which is then passed to the downstream services for data processing.

Check out this article to learn more about Azure Functions custom handlers.


    Message processing with Custom Handlers

Message processing is one of the key scenarios that Azure Functions is designed to address. In the message-processing scenario, events are often collected in queues. These events can trigger Azure Functions to execute a piece of business logic.

You can use the Service Bus trigger to respond to messages from an Azure Service Bus queue - it's then up to the Azure Functions custom handler to take further action to process the messages. The process is described in the following diagram:

    Building Serverless Go Applications with Azure functions custom handlers

In Azure Functions, the function.json file defines the function's trigger, input and output bindings, and other configuration settings. Note that every function can have multiple bindings, but only one trigger. The following is an example of setting up the Service Bus queue trigger in the function.json file:

    {
      "bindings": [
        {
          "name": "queueItem",
          "type": "serviceBusTrigger",
          "direction": "in",
          "queueName": "functionqueue",
          "connection": "ServiceBusConnection"
        }
      ]
    }

You can add a binding definition in function.json to write the output to a database or any other destination you choose. Supported bindings can be found here.
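For instance, here is a hedged sketch of an Azure Storage queue output binding that could sit alongside the trigger (the binding name and queue name are illustrative, not part of the original sample):

    {
      "name": "outputQueueItem",
      "type": "queue",
      "direction": "out",
      "queueName": "processed-messages",
      "connection": "AzureWebJobsStorage"
    }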

As we’re programming in Go, we need to set the value of defaultExecutablePath to our handler executable in the customHandler.description section of the host.json file.

Assuming we’re developing on Windows and have named our Go application server.go, running the go build server.go command produces an executable called server.exe. So we set server.exe in host.json, as in the following example:

      "customHandler": {
    "description": {
    "defaultExecutablePath": "./server.exe",
    "workingDirectory": "",
    "arguments": []
    }
    }

We’re showcasing a simple Go application with Azure Functions custom handlers that prints out the messages received from the Functions host. The following is the full code of the server.go application:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
        "os"
    )

    // InvokeRequest is the payload the Functions host sends to the custom handler.
    type InvokeRequest struct {
        Data     map[string]json.RawMessage
        Metadata map[string]interface{}
    }

    // queueHandler receives the Service Bus trigger payload and prints the message.
    func queueHandler(w http.ResponseWriter, r *http.Request) {
        var invokeRequest InvokeRequest

        d := json.NewDecoder(r.Body)
        d.Decode(&invokeRequest)

        var parsedMessage string
        json.Unmarshal(invokeRequest.Data["queueItem"], &parsedMessage)

        fmt.Println(parsedMessage)
    }

    func main() {
        // The Functions host tells the custom handler which port to listen on.
        customHandlerPort, exists := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
        if !exists {
            customHandlerPort = "8080"
        }
        mux := http.NewServeMux()
        mux.HandleFunc("/MessageProcessorFunction", queueHandler)
        fmt.Println("Go server Listening on: ", customHandlerPort)
        log.Fatal(http.ListenAndServe(":"+customHandlerPort, mux))
    }

Ensure you have Azure Functions Core Tools installed, then use the func start command to start the function locally. We then use a C#-based message sender application on GitHub to send 3,000 messages to the Azure Service Bus queue. You’ll see Azure Functions instantly start to process the messages and print them out, as shown below:
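For local runs, the ServiceBusConnection setting referenced in function.json is typically supplied through local.settings.json. A minimal sketch, with placeholder values:

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "custom",
        "ServiceBusConnection": "<your-service-bus-connection-string>"
      }
    }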

    Monitoring Serverless Go Applications with Azure functions custom handlers


    Azure portal monitoring

Let’s go back to the Azure portal to see how the messages in the Azure Service Bus queue were processed. There were 3,000 messages queued in the Service Bus queue (the blue line represents incoming messages). The outgoing messages (the red line with the smaller wave shape) show that they are progressively being read by Azure Functions, as shown below:

    Monitoring Serverless Go Applications with Azure functions custom handlers

Check out this article about monitoring Azure Service Bus for further information.

    Next steps

Thanks for following along; we’re looking forward to hearing your feedback. If you discover potential issues, please record them on the Azure Functions host GitHub repository or tag us @AzureFunctions on Twitter.

    RESOURCES

To start building your serverless applications with custom handlers, check out the official documentation:

    Life is a journey of learning. Let’s stay tuned!

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/13/index.html b/blog/tags/azure-container-apps/page/13/index.html index 5f02819bd3..7398c70406 100644 --- a/blog/tags/azure-container-apps/page/13/index.html +++ b/blog/tags/azure-container-apps/page/13/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"

    View All Tags

    · 5 min read
    Anthony Chu

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Using Visual Studio
    • Using Visual Studio Code: Docker, ACA extensions
    • Using Azure CLI
    • Using CI/CD Pipelines


    Last week, @kendallroden wrote about what it means to be Cloud-Native and how Azure Container Apps provides a serverless containers platform for hosting all of your Cloud-Native applications. Today, we’ll walk through a few ways to get your apps up and running on Azure Container Apps.

    Depending on where you are in your Cloud-Native app development journey, you might choose to use different tools to deploy your apps.

    • “Right-click, publish” – Deploying an app directly from an IDE or code editor is often seen as a bad practice, but it’s one of the quickest ways to test out an app in a cloud environment.
    • Command line interface – CLIs are useful for deploying apps from a terminal. Commands can be run manually or in a script.
    • Continuous integration/deployment – To deploy production apps, the recommended approach is to automate the process in a robust CI/CD pipeline.

    Let's explore some of these options in more depth.

    Visual Studio

    Visual Studio 2022 has built-in support for deploying .NET applications to Azure Container Apps. You can use the familiar publish dialog to provision Container Apps resources and deploy to them directly. This helps you prototype an app and see it run in Azure Container Apps with the least amount of effort.

    Journey to the cloud with Azure Container Apps

    Once you’re happy with the app and it’s ready for production, Visual Studio allows you to push your code to GitHub and set up a GitHub Actions workflow to build and deploy your app every time you push changes. You can do this by checking a box.

    Journey to the cloud with Azure Container Apps

    Visual Studio Code

    There are a couple of valuable extensions that you’ll want to install if you’re working in VS Code.

    Docker extension

The Docker extension provides commands for building a container image for your app and pushing it to a container registry. It can even do this without requiring Docker Desktop on your local machine: the “Build image in Azure” command remotely builds and pushes a container image to Azure Container Registry.

    Journey to the cloud with Azure Container Apps

And if your app doesn’t have a Dockerfile, the extension will generate one for you.

    Journey to the cloud with Azure Container Apps

    Azure Container Apps extension

    Once you’ve built your container image and pushed it to a registry, the Azure Container Apps VS Code extension provides commands for creating a container app and deploying revisions using the image you’ve built.

    Journey to the cloud with Azure Container Apps

    Azure CLI

    The Azure CLI can be used to manage pretty much anything in Azure. For Azure Container Apps, you’ll find commands for creating, updating, and managing your Container Apps resources.

    Just like in VS Code, with a few commands in the Azure CLI, you can create your Azure resources, build and push your container image, and then deploy it to a container app.
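As a rough sketch of those individual steps (resource and image names are illustrative):

    # create a resource group
    az group create --name mygroup --location eastus

    # build and push the container image with Azure Container Registry
    az acr build --registry myregistry --image hello-aca:v1 .

    # create a Container Apps environment and deploy the app
    az containerapp env create --name my-environment --resource-group mygroup --location eastus
    az containerapp create \
      --name hello-aca \
      --resource-group mygroup \
      --environment my-environment \
      --image myregistry.azurecr.io/hello-aca:v1 \
      --target-port 80 \
      --ingress external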

To make things as simple as possible, the Azure CLI also has an “az containerapp up” command. This single command takes care of everything needed to turn the source code on your local machine into a cloud-hosted application in Azure Container Apps.

    az containerapp up --name myapp --source ./src

    We saw earlier that Visual Studio can generate a GitHub Actions workflow to automatically build and deploy your app on every commit. “az containerapp up” can do this too. The following adds a workflow to a repo.

    az containerapp up --name myapp --repo https://github.com/myorg/myproject

    CI/CD pipelines

    When it’s time to take your app to production, it’s strongly recommended to set up a CI/CD pipeline to automatically and repeatably build, test, and deploy it. We’ve already seen that tools such as Visual Studio and Azure CLI can automatically generate a workflow for GitHub Actions. You can set up a pipeline in Azure DevOps too. This is an example Azure DevOps pipeline.

    trigger:
      branches:
        include:
          - main

    pool:
      vmImage: ubuntu-latest

    stages:
      - stage: Build
        jobs:
          - job: build
            displayName: Build app
            steps:
              - task: Docker@2
                inputs:
                  containerRegistry: 'myregistry'
                  repository: 'hello-aca'
                  command: 'buildAndPush'
                  Dockerfile: 'hello-container-apps/Dockerfile'
                  tags: '$(Build.BuildId)'

      - stage: Deploy
        jobs:
          - job: deploy
            displayName: Deploy app
            steps:
              - task: AzureCLI@2
                inputs:
                  azureSubscription: 'my-subscription(5361b9d6-46ea-43c3-a898-15f14afb0db6)'
                  scriptType: 'bash'
                  scriptLocation: 'inlineScript'
                  inlineScript: |
                    # automatically install Container Apps CLI extension
                    az config set extension.use_dynamic_install=yes_without_prompt

                    # ensure registry is configured in container app
                    az containerapp registry set \
                      --name hello-aca \
                      --resource-group mygroup \
                      --server myregistry.azurecr.io \
                      --identity system

                    # update container app
                    az containerapp update \
                      --name hello-aca \
                      --resource-group mygroup \
                      --image myregistry.azurecr.io/hello-aca:$(Build.BuildId)

    Conclusion

    In this article, we looked at a few ways to deploy your Cloud-Native applications to Azure Container Apps and how to decide which one to use based on where you are in your app’s journey to the cloud.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/14/index.html b/blog/tags/azure-container-apps/page/14/index.html index 97451cd3ff..a622462e43 100644 --- a/blog/tags/azure-container-apps/page/14/index.html +++ b/blog/tags/azure-container-apps/page/14/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"

    View All Tags

    · 7 min read
    Paul Yu

    Welcome to Day 11 of #30DaysOfServerless!

Yesterday we explored Azure Container Apps concepts related to environments, networking, and microservices communication - and illustrated these with a deployment example. Today, we turn our attention to scaling your container apps with demand.


    What We'll Cover

    • What makes ACA Serverless?
    • What is KEDA?
    • Scaling Your ACA
    • ACA Scaling In Action
    • Exercise: Explore azure-opensource-labs examples
    • Resources: For self-study!


    So, what makes Azure Container Apps "serverless"?

    Today we are going to focus on what makes Azure Container Apps (ACA) a "serverless" offering. But what does the term "serverless" really mean? As much as we'd like to think there aren't any servers involved, that is certainly not the case. In general, "serverless" means that most (if not all) server maintenance has been abstracted away from you.

With serverless, you don't spend any time managing and patching servers. This concern is offloaded to Azure and you simply focus on adding business value through application delivery. In addition to operational efficiency, cost efficiency can be achieved with serverless on-demand pricing models. Your workload horizontally scales out based on need and you only pay for what you use. To me, this is serverless, and my teammate @StevenMurawski said it best... "being able to scale to zero is what gives ACA its serverless magic."

    Scaling your Container Apps

If you don't know by now, ACA is built on a solid open-source foundation. Behind the scenes, it runs on a managed Kubernetes cluster and includes several open-source components out of the box, including Dapr to help you build and run microservices, Envoy Proxy for ingress capabilities, and KEDA for event-driven autoscaling. Again, you do not need to install these components yourself. All you need to be concerned with is enabling and/or configuring your container app to leverage these components.

    Let's take a closer look at autoscaling in ACA to help you optimize your container app.

    What is KEDA?

KEDA stands for Kubernetes Event-Driven Autoscaler. It is an open-source project initially started by Microsoft and Red Hat and has been donated to the Cloud-Native Computing Foundation (CNCF). It is maintained by a community of 200+ contributors and adopted by many large organizations. In terms of its status as a CNCF project, it is currently in the Incubating stage, which means the project has gone through significant due diligence and is on its way toward the Graduation stage.

Prior to KEDA, horizontally scaling your Kubernetes deployment was achieved through the Horizontal Pod Autoscaler (HPA), which relies on resource metrics such as CPU and memory to determine when additional replicas should be deployed. Being limited to CPU and memory falls a bit short for certain workloads. This is especially true for apps that need to process messages from a queue or HTTP-based apps that can only handle a specific number of incoming HTTP requests at a time. KEDA aims to fill that gap and provides a much more robust framework for scaling by working in conjunction with HPA. It offers many scalers for you to implement and even allows your deployments to scale to zero! 🥳

    KEDA architecture

    Configuring ACA scale rules

As I mentioned above, ACA's autoscaling feature leverages KEDA and gives you the ability to configure the number of replicas to deploy based on rules (event triggers). The number of replicas can be configured as a static number or a range (minimum and maximum). So if you need your containers to run 24/7, set the min and max to the same value. By default, when you deploy a container app, it is set to scale from 0 to 10 replicas. The default scaling rule uses HTTP scaling and defaults to a minimum of 10 concurrent requests per second. Once the threshold of 10 concurrent requests per second is met, another replica will be deployed until it reaches the maximum number of replicas.

    At the time of this writing, a container app can have up to 30 replicas.

    Default autoscaler

As a best practice, if you have a min/max replicas range configured, you should configure a scaling rule, even if it just explicitly sets the default values.
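As a sketch of what that could look like from the CLI (flag names assume a recent version of the containerapp extension; replica counts and names are illustrative):

    az containerapp update \
      --name my-container-app \
      --resource-group my-container-apps \
      --min-replicas 0 \
      --max-replicas 10 \
      --scale-rule-name http-rule \
      --scale-rule-type http \
      --scale-rule-http-concurrency 10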

    Adding HTTP scaling rule

    In addition to HTTP scaling, you can also configure an Azure queue rule, which allows you to use Azure Storage Queues as an event data source.

    Adding Azure Queue scaling rule

    The most flexibility comes with the Custom rule type. This opens up a LOT more options for scaling. All of KEDA's event-based scalers are supported with this option 🚀

    Adding Custom scaling rule

    Translating KEDA templates to Azure templates

When you implement Custom rules, you need to become familiar with translating KEDA templates to Azure Resource Manager templates or ACA YAML manifests. The KEDA scaler documentation is great, and it should be simple to translate KEDA template metadata to ACA rule metadata.

The images below show how to translate a scaling rule that uses Azure Service Bus as an event data source. The custom rule type is set to azure-servicebus and the details of the Service Bus are added to the Metadata section. One important thing to note here is that the connection string to the Service Bus was added as a secret on the container app, and the trigger parameter must be set to connection.

    Azure Container App custom rule metadata

    Azure Container App custom rule metadata
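The same rule can also be expressed through the CLI. A hedged sketch, assuming a recent containerapp extension (the secret, queue, and rule names are illustrative):

    # store the Service Bus connection string as a secret on the container app
    az containerapp secret set \
      --name my-container-app \
      --resource-group my-container-apps \
      --secrets sb-connection="<service-bus-connection-string>"

    # add a custom azure-servicebus scale rule that authenticates via that secret
    az containerapp update \
      --name my-container-app \
      --resource-group my-container-apps \
      --scale-rule-name servicebus-rule \
      --scale-rule-type azure-servicebus \
      --scale-rule-metadata queueName=myqueue messageCount=5 \
      --scale-rule-auth connection=sb-connection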

    Additional examples of KEDA scaler conversion can be found in the resources section and example video below.

    See Container App scaling in action

    Now that we've built up some foundational knowledge on how ACA autoscaling is implemented and configured, let's look at a few examples.

    Autoscaling based on HTTP traffic load

    Autoscaling based on Azure Service Bus message queues

    Summary

ACA brings you a true serverless experience and gives you the ability to configure autoscaling rules based on KEDA scaler templates. This gives you the flexibility to scale based on a wide variety of data sources in an event-driven manner. With the number of built-in scalers currently available, there is probably a scaler out there for all your use cases. If not, I encourage you to get involved with the KEDA community and help make it better!

    Exercise

By now, you've probably read and seen enough and are ready to give this autoscaling thing a try. The example I walked through in the videos above can be found in the azure-opensource-labs repo. I highly encourage you to head over to the containerapps-terraform folder and try the lab out. There you'll find instructions covering all the steps and tools you'll need to implement autoscaling container apps within your own Azure subscription.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun scaling your containers!

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/15/index.html b/blog/tags/azure-container-apps/page/15/index.html index 1c567acef5..f86e11a3dd 100644 --- a/blog/tags/azure-container-apps/page/15/index.html +++ b/blog/tags/azure-container-apps/page/15/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"

    View All Tags

    · 8 min read
    Paul Yu

    Welcome to Day 10 of #30DaysOfServerless!

We continue our exploration into Azure Container Apps, with today's focus being communication between microservices, and how to configure your Azure Container Apps environment in the context of a deployment example.


    What We'll Cover

    • ACA Environments & Virtual Networking
    • Basic Microservices Communications
    • Walkthrough: ACA Deployment Example
    • Summary and Next Steps
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

In yesterday's post, we learned what the Azure Container Apps (ACA) service is and the problems it aims to solve. It is considered a Container-as-a-Service platform, since much of the complex implementation detail of running a Kubernetes cluster is managed for you.

Some of the use cases for ACA include event-driven processing jobs and background tasks, but this article will focus on hosting microservices and how they can communicate with each other within the ACA service. At the end of this article, you will have a solid understanding of how networking and communication are handled, and you'll leave with a few tutorials to try.

    Environments and virtual networking in ACA

    Before we jump into microservices communication, we should review how networking works within ACA. With ACA being a managed service, Azure will take care of most of your underlying infrastructure concerns. As you provision an ACA resource, Azure provisions an Environment to deploy Container Apps into. This environment is your isolation boundary.

    Azure Container Apps Environment

    By default, Azure creates and manages a new Virtual Network (VNET) for you and the VNET is associated with the environment. As you deploy container apps, they are deployed into the same VNET and the environment is assigned a static public IP address which allows your apps to be accessible over the internet. This VNET is not visible or manageable.

    If you need control of the networking flows within the VNET, you can pre-provision one and tell Azure to deploy an environment within it. This "bring-your-own" VNET model allows you to deploy an environment in either External or Internal modes. Deploying an environment in External mode gives you the flexibility of managing your own VNET, while still allowing your containers to be accessible from outside the environment; a static public IP address is assigned to the environment. When deploying in Internal mode, your containers are accessible within the environment and/or VNET but not accessible from the internet.

Bringing your own VNET will require some planning, and you will need to dedicate an empty subnet to be used exclusively by the ACA environment. The size of your subnet depends on how many containers you plan on deploying and your scaling requirements; one requirement to know is that the subnet address range must have a /23 CIDR prefix at minimum. You will also need to think about your deployment strategy, since ACA has the concept of Revisions, which will also consume IPs from your subnet.
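If you do bring your own VNET, the environment is pointed at your dedicated subnet at creation time. A minimal CLI sketch, assuming a pre-created subnet (resource names are illustrative, and the flags reflect the current containerapp extension):

    az containerapp env create \
      --name my-environment \
      --resource-group my-container-apps \
      --location eastus \
      --infrastructure-subnet-resource-id "<subnet-resource-id>" \
      --internal-only   # omit this flag to deploy the environment in external mode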

Some additional restrictions to consider when planning your subnet address space are listed in the Resources section below and can be addressed in future posts, so be sure to follow us on dev.to and bookmark the ServerlessSeptember site.

    Basic microservices communication in ACA

When it comes to communication between containers, ACA addresses this concern with its Ingress capabilities. With HTTP ingress enabled on your container app, you can expose your app on an HTTPS endpoint.
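Ingress can be toggled in the portal or from the CLI. A hedged sketch of enabling external HTTP ingress on an existing app (names are illustrative and flags assume a recent containerapp extension):

    az containerapp ingress enable \
      --name my-container-app \
      --resource-group my-container-apps \
      --type external \
      --target-port 80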

If your environment is deployed using default networking and your containers need to be accessible from outside the environment, you will need to set the Ingress traffic option to Accepting traffic from anywhere. This will generate a Fully-Qualified Domain Name (FQDN) which you can use to access your app right away. The ingress feature also generates and assigns a Secure Socket Layer (SSL) certificate for the FQDN.

    External ingress on Container App

    If your environment is deployed using default networking and your containers only need to communicate with other containers in the environment, you'll need to set the Ingress traffic option to Limited to Container Apps Environment. You get a FQDN here as well, but in the section below we'll see how that changes.

    Internal ingress on Container App

    As mentioned in the networking section above, if you deploy your ACA environment into a VNET in internal mode, your options will be Limited to Container Apps Environment or Limited to VNet.

    Ingress on internal virtual network

    Note how the Accepting traffic from anywhere option is greyed out. If your VNET is deployed in external mode, then the option will be available.

Let's walk through an example ACA deployment

    The diagram below illustrates a simple microservices application that I deployed to ACA. The three container apps all have ingress enabled. The greeting-service app calls two backend services; a hello-service that returns the text Hello (in random casing) and a world-service that returns the text World (in a few random languages). The greeting-service concatenates the two strings together and returns Hello World to the browser. The greeting-service is the only service accessible via external ingress while two backend services are only accessible via internal ingress.

    Greeting Service overview

    With ingress enabled, let's take a quick look at the FQDN structures. Here is the FQDN of the external greeting-service.

    https://greeting-service.victoriouswave-3749d046.eastus.azurecontainerapps.io

    We can break it down into these components:

    https://[YOUR-CONTAINER-APP-NAME].[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].containerapps.io

    And here is the FQDN of the internal hello-service.

    https://hello-service.internal.victoriouswave-3749d046.eastus.azurecontainerapps.io

    Can you spot the difference between FQDNs?

    That was too easy 😉... the word internal is added as a subdomain in the FQDN between your container app name and the random name for all internal ingress endpoints.

    https://[YOUR-CONTAINER-APP-NAME].internal.[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].containerapps.io

    Now that we know the internal service FQDNs, we use them in the greeting-service app to achieve basic service-to-service communications.

So we can inject FQDNs of downstream APIs into upstream apps using environment variables, but the downside to this approach is that you need to deploy the downstream containers ahead of time, and this dependency will need to be planned for during your deployment process. There are ways around this, and one option is to leverage the auto-injected environment variables within your app code.

    If I use the Console blade for the hello-service container app and run the env command, you will see environment variables named CONTAINER_APP_NAME and CONTAINER_APP_ENV_DNS_SUFFIX. You can use these values to determine FQDNs within your upstream app.

    hello-service environment variables

    Back in the greeting-service container I can invoke the hello-service container's sayhello method. I know the container app name is hello-service and this service is exposed over an internal ingress, therefore, if I add the internal subdomain to the CONTAINER_APP_ENV_DNS_SUFFIX I can invoke a HTTP request to the hello-service from my greeting-service container.
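From a shell in the greeting-service container (for example, via the Console blade), that request could be sketched like this; the /sayhello route is assumed here for illustration:

    curl "https://hello-service.internal.${CONTAINER_APP_ENV_DNS_SUFFIX}/sayhello"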

    Invoke the sayHello method from the greeting-service container

As you can see, the ingress feature enables communication with other container apps over HTTP/S, and ACA injects environment variables into our container to help determine what the ingress FQDNs would be. All we need now is a little bit of code modification in the greeting-service app to build the FQDNs of our backend APIs by retrieving these environment variables.

    Greeting service code

    ... and now we have a working microservices app on ACA! 🎉

    Hello World

    Summary and next steps

    We've covered Container Apps networking and the basics of how containers communicate with one another. However, there is a better way to address service-to-service invocation using Dapr, which is an open-source framework for building microservices. It is natively integrated into the ACA service and in a future post, you'll learn how to enable it in your Container App to address microservices concerns and more. So stay tuned!

    Exercises

    As a takeaway for today's post, I encourage you to complete this tutorial and if you'd like to deploy the sample app that was presented in this article, my teammate @StevenMurawski is hosting a docker-compose-examples repo which includes samples for deploying to ACA using Docker Compose files. To learn more about the az containerapp compose command, a link to his blog articles are listed in the Resources section below.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun packing and shipping containers! See you in the next post!

    Resources

The sample app presented here was inspired by services demonstrated in the book Introducing Distributed Application Runtime (Dapr): Simplifying Microservices Applications Development Through Proven and Reusable Patterns and Practices. Go check it out to learn more about Dapr!

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/16/index.html b/blog/tags/azure-container-apps/page/16/index.html index 72d820f78b..fdb508cd3c 100644 --- a/blog/tags/azure-container-apps/page/16/index.html +++ b/blog/tags/azure-container-apps/page/16/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"

    View All Tags

    · 12 min read
    Nitya Narasimhan

    Welcome to Day 9 of #30DaysOfServerless!


    What We'll Cover

    • The Week Ahead
    • Hello, Container Apps!
    • Quickstart: Build Your First ACA!
    • Under The Hood: Core ACA Concepts
    • Exercise: Try this yourself!
    • Resources: For self-study!


    The Week Ahead

    Welcome to Week 2 of #ServerlessSeptember, where we put the focus on Microservices and building Cloud-Native applications that are optimized for serverless solutions on Azure. One week is not enough to do this complex topic justice so consider this a 7-part jumpstart to the longer journey.

    1. Hello, Container Apps (ACA) - Learn about Azure Container Apps, a key service that helps you run microservices and containerized apps on a serverless platform. Know the core concepts. (Tutorial 1: First ACA)
    2. Communication with Microservices - Dive deeper into two key concepts: environments and virtual networking. Learn how microservices communicate in ACA, and walkthrough an example. (Tutorial 2: ACA with 3 Microservices)
    3. Scaling Your Container Apps - Learn about KEDA. Understand how to configure your ACA for auto-scaling with KEDA-supported triggers. Put this into action by walking through a tutorial. (Tutorial 3: Configure Autoscaling)
    4. Hello, Distributed Application Runtime (Dapr) - Learn about Dapr and how its Building Block APIs simplify microservices development with ACA. Know how the sidecar pattern enables incremental adoption of Dapr APIs without requiring any Dapr code integration in app. (Tutorial 4: Setup & Explore Dapr)
    5. Building ACA with Dapr - See how Dapr works with ACA by building a Dapr-enabled Azure Container App. Walk through a .NET tutorial using Pub/Sub and State Management APIs in an enterprise scenario. (Tutorial 5: Build ACA with Dapr)
    6. Managing Secrets With Dapr - We'll look at the Secrets API (a key Building Block of Dapr) and learn how it simplifies management of sensitive information in ACA.
    7. Microservices + Serverless On Azure - We recap Week 2 (Microservices) and set the stage for Week 3 ( Integrations) of Serverless September. Plus, self-study resources including ACA development tutorials in different languages.

    Ready? Let's go!


    Azure Container Apps!

When building your application, your first decision is about where you host your application. The Azure Architecture Center has a handy chart to help you decide between choices like Azure Functions, Azure App Service, Azure Container Instances, Azure Container Apps and more. But if you are new to this space, you'll need a good understanding of the terms and concepts behind these services. Today, we'll focus on Azure Container Apps (ACA) - so let's start with the fundamentals.

    Containerized App Defined

A containerized app is one where the application components, dependencies, and configuration are packaged into a single file (container image), which can be instantiated in an isolated runtime environment (container) that is portable across hosts (OS). This makes containers lightweight and scalable - and ensures that applications behave consistently on different host platforms.

    Container images can be shared via container registries (public or private) helping developers discover and deploy related apps with less effort. Scaling a containerized app can be as simple as activating more instances of its container image. However, this requires container orchestrators to automate the management of container apps for efficiency. Orchestrators use technologies like Kubernetes to support capabilities like workload scheduling, self-healing and auto-scaling on demand.

    Cloud-Native & Microservices

    Containers are seen as one of the 5 pillars of Cloud-Native app development, an approach where applications are designed explicitly to take advantage of the unique benefits of modern dynamic environments (involving public, private and hybrid clouds). Containers are particularly suited to serverless solutions based on microservices.

    • With serverless - developers use managed services instead of managing their own infrastructure. Services are typically event-driven and can be configured for autoscaling with rules tied to event triggers. Serverless is cost-effective, with developers paying only for the compute cycles and resources they use.
    • With microservices - developers compose their applications from independent components. Each component can be deployed in its own container, and scaled at that granularity. This simplifies component reuse (across apps) and maintainability (over time) - with developers evolving functionality at microservice (vs. app) levels.

    Hello, Azure Container Apps!

    Azure Container Apps is the managed service that helps you run containerized apps and microservices as a serverless compute solution, on Azure. You can:

    • deploy serverless API endpoints - autoscaled by HTTP request traffic
    • host background processing apps - autoscaled by CPU or memory load
    • handle event-driven processing - autoscaled by #messages in queue
    • run microservices - autoscaled by any KEDA-supported scaler.

    Want a quick intro to the topic? Start by watching the short video below - then read these two posts from our ZeroToHero series:


    Deploy Your First ACA

    Dev Options

    We typically have three options for development:

    • Use the Azure Portal - provision and deploy from a browser.
    • Use Visual Studio Code (with relevant extensions) - if you prefer an IDE.
    • Use the Azure CLI - if you prefer to build and deploy from the command line.

    The documentation site has quickstarts for three contexts:

    For this quickstart, we'll go with the first option (sample image) so we can move quickly to core concepts. We'll leave the others as an exercise for you to explore.

    1. Setup Resources

    PRE-REQUISITES

    You need:

    • An Azure account with an active subscription
    • An installed Azure CLI

    Start by logging into Azure from the CLI. The command should launch a browser to complete the auth flow (or give you an option to take an alternative path).

    $ az login

    Successful authentication will result in extensive command-line output detailing the status of your subscription.

    Next, install the Azure Container Apps extension for the CLI

    $ az extension add --name containerapp --upgrade
    ...
    The installed extension 'containerapp' is in preview.

    Once successfully installed, register the Microsoft.App namespace.

    $ az provider register --namespace Microsoft.App

    Then set local environment variables in that terminal - and verify they are set correctly:

    $ RESOURCE_GROUP="my-container-apps"
    $ LOCATION="canadacentral"
    $ CONTAINERAPPS_ENVIRONMENT="my-environment"

    $ echo $LOCATION $RESOURCE_GROUP $CONTAINERAPPS_ENVIRONMENT
    canadacentral my-container-apps my-environment

    Now you can use Azure CLI to provision a resource group for this tutorial. Creating a resource group also makes it easier for us to delete/reclaim all resources used at the end of this tutorial.

    az group create \
    --name $RESOURCE_GROUP \
    --location $LOCATION
    Congratulations

    You completed the Setup step!

    On completion, the console should print out the details of the newly created resource group. You should also be able to visit the Azure Portal and find the newly-active my-container-apps resource group under your active subscription.

    2. Create Environment

    An environment is like the picket fence around your property. It creates a secure boundary that contains a group of container apps - such that all apps deployed to it share the same virtual network and logging resources.

    $ az containerapp env create \
    --name $CONTAINERAPPS_ENVIRONMENT \
    --resource-group $RESOURCE_GROUP \
    --location $LOCATION

    No Log Analytics workspace provided.
    Generating a Log Analytics workspace with name ...

    This can take a few minutes. When done, you will see the terminal display more details. You can also check the resource group in the portal and see that a Container Apps Environment and a Log Analytics Workspace are created for you as part of this step.

    You've got the fence set up. Now it's time to build your home - er, container app!

    3. Create Container App

    Here's the command we'll use to create our first Azure Container App. Note that the --image argument provides the link to a pre-existing containerapps-helloworld image.

    az containerapp create \
    --name my-container-app \
    --resource-group $RESOURCE_GROUP \
    --environment $CONTAINERAPPS_ENVIRONMENT \
    --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
    --target-port 80 \
    --ingress 'external' \
    --query properties.configuration.ingress.fqdn
    ...
    ...

    Container app created. Access your app at <URL>

The --ingress property shows that the app is open to external requests; in other words, it is publicly visible at the <URL> that is printed out on the terminal on successful completion of this step.

    4. Verify Deployment

Let's see if this works. You can verify your container app by visiting the URL returned above in your browser. You should see something like this!

    Container App Hello World

You can also visit the Azure Portal and look under the created Resource Group. You should see that a new Container App resource was created after this step.

    Congratulations

    You just created and deployed your first "Hello World" Azure Container App! This validates your local development environment setup and existence of a valid Azure subscription.

    5. Clean Up Your Resources

    It's good practice to clean up resources once you are done with a tutorial.

    THIS ACTION IS IRREVERSIBLE

    This command deletes the resource group we created above - and all resources in it. So make sure you specified the right name, then confirm deletion.

    $ az group delete --name $RESOURCE_GROUP
    Are you sure you want to perform this operation? (y/n):

    Note that you can also delete the resource group from the Azure Portal interface if that feels more comfortable. For now, we'll just use the Portal to verify that deletion occurred. If you had previously opened the Resource Group page for the created resource, just refresh it. You should see something like this:

    Resource Not Found


    Core Concepts

    COMING SOON

    An illustrated guide summarizing these concepts in a single sketchnote.

    We covered a lot today - we'll stop with a quick overview of core concepts behind Azure Container Apps, each linked to documentation for self-study. We'll dive into more details on some of these concepts in upcoming articles:

    • Environments - are the secure boundary around a group of container apps that are deployed in the same virtual network. They write logs to a shared Log Analytics workspace and can communicate seamlessly using Dapr, if used.
    • Containers refer to the container image deployed in the Azure Container App. They can use any runtime, programming language, or development stack - and be discovered using any public or private container registry. A container app can support multiple containers.
    • Revisions are immutable snapshots of an Azure Container App. The first revision is created when the ACA is first deployed, with new revisions created when redeployment occurs with revision-scope changes. Multiple revisions can run concurrently in an environment.
    • Application Lifecycle Management revolves around these revisions, with a container app having three phases: deployment, update and deactivation.
    • Microservices are independent units of functionality in Cloud-Native architectures. A single container app typically represents a single microservice, and can be composed from one or more containers. Microservices can now be scaled and upgraded independently, giving your application more flexibility and control.
    • Networking architecture consists of a virtual network (VNET) associated with the environment. Unless you provide a custom VNET at environment creation time, a default VNET is automatically created. The VNET configuration determines access (ingress, internal vs. external) and can influence auto-scaling choices (e.g., use the HTTP Edge Proxy and scale based on the number of HTTP requests).
    • Observability is about monitoring the health of your application and diagnosing it to improve reliability or performance. Azure Container Apps has a number of features - from Log streaming and Container console to integration with Azure Monitor - to provide a holistic view of application status over time.
    • Easy Auth is possible with built-in support for authentication and authorization including support for popular identity providers like Facebook, Google, Twitter and GitHub - alongside the Microsoft Identity Platform.

    Keep these terms in mind as we walk through more tutorials this week, to see how they find application in real examples. Finally, a note on Dapr, the Distributed Application Runtime that abstracts away many of the challenges posed by distributed systems - and lets you focus on your application logic.

    DAPR INTEGRATION MADE EASY

    Dapr uses a sidecar architecture, allowing Azure Container Apps to communicate with Dapr Building Block APIs over either gRPC or HTTP. Your ACA can be built to run with or without Dapr - giving you the flexibility to incrementally adopt specific APIs and unlock related capabilities as the need arises.
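For reference, enabling the Dapr sidecar on a container app is a matter of a few flags at creation time. A hedged CLI sketch (the app id and port are illustrative):

    az containerapp create \
      --name my-container-app \
      --resource-group my-container-apps \
      --environment my-environment \
      --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
      --target-port 80 \
      --ingress external \
      --enable-dapr \
      --dapr-app-id my-container-app \
      --dapr-app-port 80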

In later articles this week, we'll do a deeper dive into Dapr and build our first Dapr-enabled Azure Container App to get a better understanding of this integration.

    Exercise

    Congratulations! You made it! By now you should have a good idea of what Cloud-Native development means, why Microservices and Containers are important to that vision - and how Azure Container Apps helps simplify the building and deployment of microservices based applications using serverless architectures on Azure.

    Now it's your turn to reinforce learning by doing.

    Resources

    Three key resources to bookmark and explore:

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/17/index.html b/blog/tags/azure-container-apps/page/17/index.html index 516cab3f0b..7c3be7332b 100644 --- a/blog/tags/azure-container-apps/page/17/index.html +++ b/blog/tags/azure-container-apps/page/17/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings, that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
    • Integrate Functions with CosmosDB and SignalR to bring real-time, push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn, which walks you through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days) but this time, with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - it explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/18/index.html b/blog/tags/azure-container-apps/page/18/index.html index a59e2c31e8..6ecbbbece1 100644 --- a/blog/tags/azure-container-apps/page/18/index.html +++ b/blog/tags/azure-container-apps/page/18/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    20 posts tagged with "azure-container-apps"

    View All Tags

    · 8 min read
    David Justo

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Durable Entities
    • Some Background
    • A Programming Model
    • Entities for a Micro-Blogging Platform


    Durable Entities are a special type of Azure Function that allow you to implement stateful objects in a serverless environment. They make it easy to introduce stateful components to your app without needing to manually persist data to external storage, so you can focus on your business logic. We’ll demonstrate their power with a real-life example in the last section.

    Entities 101: Some Background

    Programming Durable Entities feels a lot like object-oriented programming, except that these “objects” exist in a distributed system. Like objects, each Entity instance has a unique identifier, i.e. an entity ID that can be used to read and manipulate their internal state. Entities define a list of operations that constrain how their internal state is managed, like an object interface.

    Some experienced readers may realize that Entities sound a lot like an implementation of the Actor Pattern. For a discussion of the relationship between Entities and Actors, please refer to this documentation.

    Entities are a part of the Durable Functions Extension, an extension of Azure Functions that empowers programmers with stateful abstractions for serverless, such as Orchestrations (i.e. workflows).

    Durable Functions is available in most Azure Functions runtime environments: .NET, Node.js, Python, PowerShell, and Java (preview). For this article, we’ll focus on the C# experience, but note that Entities are also available in Node.js and Python; their availability in other languages is underway.

    Entities 102: The programming model

    Imagine you want to implement a simple Entity that just counts things. Its interface allows you to get the current count, add to the current count, and to reset the count to zero.

    If you implement this in an object-oriented way, you’d probably define a class (say “Counter”) with a method to get the current count (say “Counter.Get”), another to add to the count (say “Counter.Add”), and another to reset the count (say “Counter.Reset”). Well, the implementation of an Entity in C# is not that different from this sketch:

    [JsonObject(MemberSerialization.OptIn)]
    public class Counter
    {
        [JsonProperty("value")]
        public int Value { get; set; }

        public void Add(int amount)
        {
            this.Value += amount;
        }

        public Task Reset()
        {
            this.Value = 0;
            return Task.CompletedTask;
        }

        public Task<int> Get()
        {
            return Task.FromResult(this.Value);
        }

        [FunctionName(nameof(Counter))]
        public static Task Run([EntityTrigger] IDurableEntityContext ctx)
            => ctx.DispatchAsync<Counter>();
    }

    We’ve defined a class named Counter, with an internal count stored in the variable “Value” which is manipulated through the “Add” and “Reset” methods, and which can be read via “Get”.

The “Run” method is simply boilerplate required for the Azure Functions framework to interact with the object we’ve defined – it’s the method that the framework calls internally when it needs to load the Entity object. When DispatchAsync is called, the Entity and its corresponding state (the last count in “Value”) are loaded from storage. Again, this is mostly just boilerplate: your Entity’s business logic lies in the rest of the class.

    Finally, the Json annotation on top of the class and the Value field tells the Durable Functions framework that the “Value” field is to be durably persisted as part of the durable state on each Entity invocation. If you were to annotate other class variables with JsonProperty, they would also become part of the managed state.

    Entities for a micro-blogging platform

We’ll try to implement a simple micro-blogging platform, a la Twitter. Let’s call it “Chirper”. In Chirper, users write chirps (i.e. tweets), they can follow and unfollow other users, and they can read the chirps of users they follow.

    Defining Entity

    Just like in OOP, it’s useful to begin by identifying what are the stateful agents of this scenario. In this case, users have state (who they follow and their chirps), and chirps have state in the form of their content. So, we could model these stateful agents as Entities!

    Below is a potential way to implement a User for Chirper as an Entity:

    [JsonObject(MemberSerialization = MemberSerialization.OptIn)]
    public class User: IUser
    {
        [JsonProperty]
        public List<string> FollowedUsers { get; set; } = new List<string>();

        public void Add(string user)
        {
            FollowedUsers.Add(user);
        }

        public void Remove(string user)
        {
            FollowedUsers.Remove(user);
        }

        public Task<List<string>> Get()
        {
            return Task.FromResult(FollowedUsers);
        }

        // note: removed boilerplate “Run” method, for conciseness.
    }

In this case, our Entity’s internal state is stored in “FollowedUsers”, which is an array of the accounts followed by this user. The operations exposed by this entity allow clients to read and modify this data: it can be read by “Get”, a newly followed account can be added via “Add”, and a user can be unfollowed via “Remove”.

With that, we’ve modeled a Chirper user as an Entity! Recall that Entity instances each have a unique ID, so we can consider that unique ID to correspond to a specific user account.

What about chirps? Should we represent them as Entities as well? That would certainly be valid. However, we would then need to create a mapping between a user’s entity ID and the entity ID of every chirp that user wrote.

    For demonstration purposes, a simpler approach would be to create an Entity that stores the list of all chirps authored by a given user; call it UserChirps. Then, we could fix each User Entity to share the same entity ID as its corresponding UserChirps Entity, making client operations easier.

    Below is a simple implementation of UserChirps:

    [JsonObject(MemberSerialization = MemberSerialization.OptIn)]
    public class UserChirps : IUserChirps
    {
        [JsonProperty]
        public List<Chirp> Chirps { get; set; } = new List<Chirp>();

        public void Add(Chirp chirp)
        {
            Chirps.Add(chirp);
        }

        public void Remove(DateTime timestamp)
        {
            Chirps.RemoveAll(chirp => chirp.Timestamp == timestamp);
        }

        public Task<List<Chirp>> Get()
        {
            return Task.FromResult(Chirps);
        }

        // Omitted boilerplate “Run” function
    }

Here, our state is stored in Chirps, a list of user posts. Our operations are the same as before: Get, Add, and Remove. It’s the same pattern as before, but we’re representing different data.

    To put it all together, let’s set up Entity clients to generate and manipulate these Entities according to some REST API.

    Interacting with Entity

    Before going there, let’s talk briefly about how you can interact with an Entity. Entity interactions take one of two forms -- calls and signals:

Calling an entity is a two-way communication. You send an operation message to the entity and then wait for the response message before you continue. The response can be a result value or an error.

Signaling an entity is a one-way (fire-and-forget) communication. You send an operation message but don’t wait for a response. You have the reassurance that the message will be delivered eventually, but you don’t know when and don’t know what the response is.

For example, when you read the state of an Entity, you are performing a “call” interaction. When you record that a user has followed another, you may choose to simply signal it.
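To make the distinction concrete, here is a hedged C# sketch of both interaction styles from inside an orchestration, reusing the User entity and operation names defined earlier (the orchestration itself is hypothetical):

    [FunctionName("FollowAndListOrchestration")]
    public static async Task<List<string>> FollowAndList(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var entityId = new EntityId(nameof(User), "durableFan99");

        // Signal: one-way, fire-and-forget; we don't wait for a response.
        context.SignalEntity(entityId, "Add", "anotherUser");

        // Call: two-way; we wait for the entity to return its followed accounts.
        return await context.CallEntityAsync<List<string>>(entityId, "Get");
    }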

Now say a user with a given userId (say “durableFan99”) wants to post a chirp. For this, you can write an HTTP endpoint that signals the UserChirps entity to record that chirp. We can leverage the HTTP trigger functionality from Azure Functions and pair it with an entity client binding that signals the Add operation of our UserChirps Entity:

    [FunctionName("UserChirpsPost")] 
    public static async Task<HttpResponseMessage> UserChirpsPost(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "user/{userId}/chirps")]
    HttpRequestMessage req,
    DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {
    Authenticate(req, userId);
    var chirp = new Chirp()
    {
    UserId = userId,
    Timestamp = DateTime.UtcNow,
    Content = await req.Content.ReadAsStringAsync(),
    };
    await client.SignalEntityAsync<IUserChirps>(userId, x => x.Add(chirp));
    return req.CreateResponse(HttpStatusCode.Accepted, chirp);
    }

Following the same pattern as above, to get all the chirps from a user you could read the state of the Entity via ReadEntityStateAsync, which follows the call-interaction pattern since your client expects a response:

    [FunctionName("UserChirpsGet")] 
    public static async Task<HttpResponseMessage> UserChirpsGet(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "user/{userId}/chirps")] HttpRequestMessage req,
    [DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {

    Authenticate(req, userId);
    var target = new EntityId(nameof(UserChirps), userId);
    var chirps = await client.ReadEntityStateAsync<UserChirps>(target);
    return chirps.EntityExists
    ? req.CreateResponse(HttpStatusCode.OK, chirps.EntityState.Chirps)
    : req.CreateResponse(HttpStatusCode.NotFound);
    }

    And there you have it! To play with a complete implementation of Chirper, you can try out our sample in the Durable Functions extension repo.

    Thank you!

    info

Thanks for following along, and we hope you find Entities as useful as we do! If you have questions or feedback, please file issues in the repo above or tag us @AzureFunctions on Twitter.

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/19/index.html b/blog/tags/azure-container-apps/page/19/index.html index 5b1d0422c0..154211dabc 100644 --- a/blog/tags/azure-container-apps/page/19/index.html +++ b/blog/tags/azure-container-apps/page/19/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"

    View All Tags

    · 8 min read
    Kendall Roden

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Defining Cloud-Native
    • Introduction to Azure Container Apps
    • Dapr In Azure Container Apps
    • Conclusion


    Defining Cloud-Native

While I’m positive I’m not the first person to ask this, I think it’s an appropriate way to kick off this article: “How many developers does it take to define Cloud-Native?” I hope you aren’t waiting for a punchline, because I seriously want to know your thoughts (drop your perspectives in the comments!), but if you ask me, the limit does not exist!

    A quick online search of the topic returns a laundry list of articles, e-books, twitter threads, etc. all trying to nail down the one true definition. While diving into the rabbit hole of Cloud-Native, you will inevitably find yourself on the Cloud-Native Computing Foundation (CNCF) site. The CNCF is part of the Linux Foundation and aims to make "Cloud-Native computing ubiquitous" through deep open source project and community involvement. The CNCF has also published arguably the most popularized definition of Cloud-Native which begins with the following statement:

    “Cloud-Native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds."

    Over the past four years, my day-to-day work has been driven primarily by the surging demand for application containerization and the drastic adoption of Kubernetes as the de-facto container orchestrator. Customers are eager to learn and leverage patterns, practices and technologies that enable building "loosely coupled systems that are resilient, manageable, and observable". Enterprise developers at these organizations are being tasked with rapidly deploying event-driven, horizontally-scalable, polyglot services via repeatable, code-to-cloud pipelines.

    While building Cloud-Native solutions can enable rapid innovation, the transition to adopting a Cloud-Native architectural approach comes with a steep learning curve and a new set of considerations. In a document published by Microsoft called What is Cloud-Native?, there are a few key areas highlighted to aid customers in the adoption of best practices for building modern, portable applications which I will summarize below:

    Cloud infrastructure

    • Cloud-Native applications leverage cloud infrastructure and make use of Platform-as-a-service offerings
    • Cloud-Native applications depend on highly-elastic infrastructure with automatic scaling, self-healing, and monitoring capabilities

    Modern application design

    • Cloud-Native applications should be constructed using principles outlined in the 12 factor methodology

    Microservices

    • Cloud-Native applications are typically composed of microservices where each core function, or service, is built and deployed independently

    Containers

    • Cloud-Native applications are typically deployed using containers as a packaging mechanism where an application's code and dependencies are bundled together for consistency of deployment
• Cloud-Native applications leverage container orchestration technologies (primarily Kubernetes) for achieving capabilities such as workload scheduling, self-healing, auto-scale, etc.

    Backing services

    • Cloud-Native applications are ideally stateless workloads which retrieve and store data in data stores external to the application hosting infrastructure. Cloud providers like Azure provide an array of backing data services which can be securely accessed from application code and provide capabilities for ensuring application data is highly-available

    Automation

    • Cloud-Native solutions should use deployment automation for backing cloud infrastructure via versioned, parameterized Infrastructure as Code (IaC) templates which provide a consistent, repeatable process for provisioning cloud resources.
    • Cloud-Native solutions should make use of modern CI/CD practices and pipelines to ensure successful, reliable infrastructure and application deployment.

    Azure Container Apps

    In many of the conversations I've had with customers that involve talk of Kubernetes and containers, the topics of cost-optimization, security, networking, and reducing infrastructure and operations inevitably arise. I personally have yet to meet with any customers eager to have their developers get more involved with infrastructure concerns.

    One of my former colleagues, Jeff Hollan, made a statement while appearing on a 2019 episode of The Cloud-Native Show where he shared his perspective on Cloud-Native:

    "When I think about Cloud-Native... it's writing applications in a way where you are specifically thinking about the benefits the cloud can provide... to me, serverless is the perfect realization of that because the only reason you can write serverless applications is because the cloud exists."

I must say that I agree with Jeff's perspective. In addition to optimizing development practices for the Cloud-Native world, reducing infrastructure exposure and operations is equally important to many organizations and can be achieved as a result of cloud platform innovation.

    In May of 2022, Microsoft announced the general availability of Azure Container Apps. Azure Container Apps provides customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform.

For those interested in taking advantage of the open source ecosystem while reaping the benefits of a managed platform experience, Azure Container Apps runs on Kubernetes and provides a set of managed open source projects embedded directly into the platform, including the Kubernetes Event Driven Autoscaler (KEDA), the Distributed Application Runtime (Dapr), and Envoy.

    Azure Kubernetes Service vs. Azure Container Apps

Container Apps provides other Cloud-Native features and capabilities in addition to those above, including but not limited to:

The ability to dynamically scale and support growing numbers of users, events, and requests is one of the core requirements for most Cloud-Native, distributed applications. Azure Container Apps is purpose-built with this and other Cloud-Native tenets in mind.

    What can you build with Azure Container Apps?

    Dapr in Azure Container Apps

As a quick personal note before we dive into this section, I will say I am a bit biased about Dapr. When Dapr was first released, I had an opportunity to get involved immediately and became an early advocate for the project. It was created by developers for developers, and it solves tangible problems that customers architecting distributed systems face:

    HOW DO I
    • integrate with external systems that my app has to react and respond to?
    • create event driven apps which reliably send events from one service to another?
    • observe the calls and events between my services to diagnose issues in production?
    • access secrets securely from within my application?
    • discover other services and call methods on them?
    • prevent committing to a technology early and have the flexibility to swap out an alternative based on project or environment changes?

    While existing solutions were in the market which could be used to address some of the concerns above, there was not a lightweight, CNCF-backed project which could provide a unified approach to solve the more fundamental ask from customers: "How do I make it easy for developers to build microservices based on Cloud-Native best practices?"

    Enter Dapr!

The Distributed Application Runtime (Dapr) provides APIs that simplify microservice connectivity. Whether your communication pattern is service-to-service invocation or pub/sub messaging, Dapr helps you write resilient and secured microservices. By letting Dapr’s sidecar take care of complex challenges such as service discovery, message broker integration, encryption, observability, and secret management, you can focus on business logic and keep your code simple.
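To make that concrete, here is a minimal, hedged sketch (not from this post) of service-to-service invocation through the Dapr sidecar over its HTTP API; the app id "inventory" and the route "api/stock" are invented for the example:

// The sidecar runs next to the app and listens on a local port,
// which Dapr injects via the DAPR_HTTP_PORT environment variable.
using var http = new HttpClient();
var daprPort = Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500";

// Dapr's HTTP service-invocation API: /v1.0/invoke/<app-id>/method/<route>.
// Dapr resolves "inventory" to a running instance and handles retries, mTLS, and tracing.
var response = await http.GetAsync(
    $"http://localhost:{daprPort}/v1.0/invoke/inventory/method/api/stock");
response.EnsureSuccessStatusCode();
var stockJson = await response.Content.ReadAsStringAsync();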

The Container Apps platform provides a managed and supported Dapr integration, which eliminates the need for deploying and managing the Dapr OSS project yourself. In addition to providing managed upgrades, the platform also exposes a simplified Dapr interaction model to increase developer productivity and reduce the friction required to leverage Dapr capabilities. While the Dapr integration makes it easier for customers to adopt Cloud-Native best practices in container apps, it is not required to make use of the container apps platform.

    Image on Dapr

For additional insight into the Dapr integration, visit aka.ms/aca-dapr.

    Conclusion

    Backed by and integrated with powerful Cloud-Native technologies, Azure Container Apps strives to make developers productive, while reducing the operational overhead and learning curve that typically accompanies adopting a cloud-native strategy.

    If you are interested in building resilient, portable and highly-scalable apps visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/2/index.html b/blog/tags/azure-container-apps/page/2/index.html index 0417248fc4..b621e7effa 100644 --- a/blog/tags/azure-container-apps/page/2/index.html +++ b/blog/tags/azure-container-apps/page/2/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"

    View All Tags

    · 7 min read
    Brian Benz

    Welcome to Day 25 of #30DaysOfServerless!

    Azure Container Apps enable application code packaged in containers to run and scale without the overhead of managing cloud infrastructure and container orchestration. In this post I'll show you how to deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps.


    What We'll Cover

    • Introduction to Deploying Java containers in the cloud
    • Step-by-step: Deploying to Azure Container Registry
    • Step-by-step: Deploying and running on Azure Container Apps
    • Resources: For self-study!


    Deploy Java containers to cloud

    We'll deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps. Here are the main steps:

    • Create Azure Container Registry (ACR) on Azure portal
    • Create Azure Container App (ACA) on Azure portal.
    • Deploy code to Azure Container Registry from the Azure CLI.
    • Deploy container from ACR to ACA using the Azure portal.
    PRE-REQUISITES

    Sign in to Azure from the CLI using the az login command, and follow the prompts in your browser to complete the authentication process. Also, ensure you're running the latest version of the CLI by using the az upgrade command.

    1. Get Sample Code

Fork and clone the sample GitHub repo to your local machine. Navigate to the sample repo and click Fork in the top-right corner of the page.

The example code that we're using is a very basic containerized Spring Boot example. There is a lot more to learn about Spring Boot apps in Docker; for a deep dive, check out this Spring Boot Guide.

    2. Run Sample Locally (Optional)

    If you have docker installed locally, you can optionally test the code on your local machine. Navigate to the root directory of the forked repository and run the following commands:

    docker build -t spring-boot-docker-aca .
    docker run -p 8080:8080 spring-boot-docker-aca

Open a browser and go to http://localhost:8080. You should see this message:

    Hello Docker World

That indicates that the Spring Boot app is successfully running locally in a Docker container.

Next, let's set up an Azure Container Registry and an Azure Container App and deploy this container to the cloud!


    3. Step-by-step: Deploy to ACR

    To create a container registry from the portal dashboard, Select Create a resource > Containers > Container Registry.

    Navigate to container registry in portal

    In the Basics tab, enter values for Resource group and Registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. Create a new resource group in the West US location named spring-boot-docker-aca. Select the 'Basic' SKU.

    Keep the default values for the remaining settings. Then select Review + create, then Create. When the Deployment succeeded message appears, select the container registry in the portal.

    Note the registry server name ending with azurecr.io. You will use this in the following steps when you push and pull images with Docker.

    3.1 Log into registry using the Azure CLI

    Before pushing and pulling container images, you must log in to the registry instance. Sign into the Azure CLI on your local machine, then run the az acr login command. For this step, use the registry name, not the server name ending with azurecr.io.

    From the command line, type:

    az acr login --name myregistryname

    The command returns Login Succeeded once completed.

    3.2 Build & deploy with az acr build

Next, we're going to deploy the Docker container we created earlier using the az acr build command. az acr build builds the container image from local code and pushes it to Azure Container Registry if the build is successful.

From the command line, go to your local clone of the spring-boot-docker-aca repo and type:

    az acr build --registry myregistryname --image spring-boot-docker-aca:v1 .

    3.3 List container images

Once the az acr build command is complete, you should be able to view the container as a repository in the registry. In the portal, open your registry and select Repositories, then select the spring-boot-docker-aca repository that the build created. You should also see the v1 image under Tags.

    4. Deploy on ACA

    Now that we have an image in the Azure Container Registry, we can deploy it to Azure Container Apps. For the first deployment, we'll pull the container from our ACR as part of the ACA setup.

    4.1 Create a container app

    We'll create the container app at the same place that we created the container registry in the Azure portal. From the portal, select Create a resource > Containers > Container App. In the Basics tab, set these values:

    4.2 Enter project details

• Subscription: Your Azure subscription.
• Resource group: Use the spring-boot-docker-aca resource group.
• Container app name: Enter spring-boot-docker-aca.

    4.3 Create an environment

    1. In the Create Container App environment field, select Create new.

    2. In the Create Container App Environment page on the Basics tab, enter the following values:

  • Environment name: Enter my-environment.
  • Region: Select westus3.
    3. Select OK.

    4. Select the Create button at the bottom of the Create Container App Environment page.

    5. Select the Next: App settings button at the bottom of the page.

    5. App settings tab

    The App settings tab is where you connect to the ACR and pull the repository image:

• Use quickstart image: Uncheck the checkbox.
• Name: Enter spring-boot-docker-aca.
• Image source: Select Azure Container Registry.
• Registry: Select your ACR from the list.
• Image: Select spring-boot-docker-aca from the list.
• Image Tag: Select v1 from the list.

    5.1 Application ingress settings

• Ingress: Select Enabled.
• Ingress visibility: Select External to publicly expose your container app.
• Target port: Enter 8080.

    5.2 Deploy the container app

    1. Select the Review and create button at the bottom of the page.
    2. Select Create.

    Once the deployment is successfully completed, you'll see the message: Your deployment is complete.

    5.3 Verify deployment

    In the portal, go to the Overview of your spring-boot-docker-aca Azure Container App, and click on the Application Url. You should see this message in the browser:

    Hello Docker World

That indicates that the Spring Boot app is running in a Docker container in your spring-boot-docker-aca Azure Container App.

    Resources: For self-study!

Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/20/index.html b/blog/tags/azure-container-apps/page/20/index.html index daf279ab72..e94f9ecbbd 100644 --- a/blog/tags/azure-container-apps/page/20/index.html +++ b/blog/tags/azure-container-apps/page/20/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"

    View All Tags

    · 7 min read
    Aaron Powell

    Welcome to Day 5 of #30DaysOfServerless!

    Yesterday we looked at Azure Functions from the perspective of a Java developer. Today, we'll do a similar walkthrough from the perspective of a JavaScript developer.

    And, we'll use this to explore another popular usage scenario for Azure Functions: building a serverless HTTP API using JavaScript.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance
    • Create Azure Function with CLI
    • Calling an external API
    • Azure Samples & Scenarios for JS
    • Exercise: Support searching
    • Resources: For self-study!


    Developer Guidance

    If you're a JavaScript developer new to serverless on Azure, start by exploring the Azure Functions JavaScript Developers Guide. It covers:

• Quickstarts for Node.js - using Visual Studio Code, the CLI, or the Azure Portal
    • Guidance on hosting options and performance considerations
• Azure Functions bindings and code samples for JavaScript
    • Scenario examples - integrations with other Azure Services

    Node.js 18 Support

    Node.js 18 Support (Public Preview)

    Azure Functions support for Node.js 18 entered Public Preview on Aug 31, 2022 and is supported by the Azure Functions v.4.x runtime!

    As we continue to explore how we can use Azure Functions, today we're going to look at using JavaScript to create one, and we're going to be using the newly released Node.js 18 support for Azure Functions to make the most out of the platform.

    Ensure you have Node.js 18 and Azure Functions v4.x versions installed, along with a text editor (I'll use VS Code in this post), and a terminal, then we're ready to go.

    Scenario: Calling The GitHub API

    The application we're going to be building today will use the GitHub API to return a random commit message, so that we don't need to come up with one ourselves! After all, naming things can be really hard! 🤣

    Creating the Azure Function

    To create our Azure Function, we're going to use the Azure Functions CLI, which we can install using npm:

npm install --global azure-functions-core-tools

Once that's installed, we can use the new func command to initialise our project:

    func init --worker-runtime node --language javascript

When running func init we can either provide the worker-runtime and language as arguments, or use the menu system that the tool will provide us. For brevity's sake, I've used the arguments here, specifying that we want node as the runtime and javascript as the language, but you could change that to typescript if you'd prefer to use TypeScript.

    Once the init command is completed, you should have a .vscode folder, and the files .gitignore, host.json, local.settings.json, and package.json.

Files generated by func init

Adding an HTTP Trigger

We have an empty Functions app so far. What we need to do next is create a Function that it will run, and we're going to make an HTTP Trigger Function, which is a Function that responds to HTTP requests. We'll use the func new command to create that:

    func new --template "HTTP Trigger" --name "get-commit-message"

When this completes, we'll have a folder for the Function, using the name we provided, that contains the files function.json and index.js. Let's open the function.json to understand it a little bit:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

This file is used to tell Functions about the Function that we've created and what it does, so it knows to handle the appropriate events. We have a bindings node which contains the event bindings for our Azure Function. The first binding uses the type httpTrigger, which indicates that it'll be executed, or triggered, by an HTTP event, and the methods list indicates that it's listening to both GET and POST (you can change this to the HTTP methods that you want to support). The HTTP request information will be bound to a property in the Functions context called req, so we can access query strings, the request body, etc.

The other binding we have has the direction of out, meaning that it's something the Function will return to the caller, and since this is an HTTP API, the type is http, indicating that we'll return an HTTP response, and that response will be on a property called res that we add to the Functions context.

    Let's go ahead and start the Function and call it:

    func start

Starting the Function

    With the Function started, access the endpoint http://localhost:7071/api/get-commit-message via a browser or using cURL:

    curl http://localhost:7071/api/get-commit-message\?name\=ServerlessSeptember

Hello from Azure Functions

    🎉 CONGRATULATIONS

    You created and ran a JavaScript function app locally!

    Calling an external API

    It's time to update the Function to do what we want to do - call the GitHub Search API and get some commit messages. The endpoint that we'll be calling is https://api.github.com/search/commits?q=language:javascript.

    Note: The GitHub API is rate limited and this sample will call it unauthenticated, so be aware of that in your own testing.

    To call this API, we'll leverage the newly released fetch support in Node 18 and async/await, to make for a very clean Function.

Open up the index.js file, and delete the contents of the existing Function, so we have an empty one:

    module.exports = async function (context, req) {

    }

    The default template uses CommonJS, but you can use ES Modules with Azure Functions if you prefer.

    Now we'll use fetch to call the API, and unpack the JSON response:

module.exports = async function (context, req) {
  const res = await fetch("https://api.github.com/search/commits?q=language:javascript");
  const json = await res.json();
  const messages = json.items.map(item => item.commit.message);
  context.res = {
    body: {
      messages
    }
  };
}

    To send a response to the client, we're setting the context.res property, where res is the name of the output binding in our function.json, and giving it a body that contains the commit messages.

    Run func start again, and call the endpoint:

    curl http://localhost:7071/api/get-commit-message

Then you'll get some commit messages:

A series of commit messages from the GitHub Search API

    🎉 CONGRATULATIONS

There we go! We've created an Azure Function that acts as a proxy to another API, which we call (using native fetch in Node.js 18), and from which we return a subset of the JSON payload.

    Next Steps

    Other Triggers, Bindings

    This article focused on using the HTTPTrigger and relevant bindings, to build a serverless API using Azure Functions. How can you explore other supported bindings, with code samples to illustrate usage?

    Scenarios with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other services. Here are some suggestions:

    Exercise: Support searching

    The GitHub Search API allows you to provide search parameters via the q query string. In this sample, we hard-coded it to be language:javascript, but as a follow-on exercise, expand the Function to allow the caller to provide the search terms as a query string to the Azure Function, which is passed to the GitHub Search API. Hint - have a look at the req argument.

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/azure-container-apps/page/3/index.html b/blog/tags/azure-container-apps/page/3/index.html index 3375026213..8cb2aff525 100644 --- a/blog/tags/azure-container-apps/page/3/index.html +++ b/blog/tags/azure-container-apps/page/3/index.html @@ -14,13 +14,13 @@ - +

    20 posts tagged with "azure-container-apps"

    View All Tags

    · 19 min read
    Alex Wolf

    Welcome to Day 24 of #30DaysOfServerless!

    We continue exploring E2E scenarios with this tutorial where you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps.

    The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.


    What We'll Cover

    • Deploy ASP.NET Core 6.0 app to Azure Container Apps
    • Automate deployment workflows using GitHub Actions
    • Provision and deploy resources using Azure Bicep
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.

    In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps. The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.

    You will use GitHub Actions in combination with Bicep to deploy the application. These tools provide an approachable and sustainable solution for building CI/CD pipelines and working with Container Apps.

    PRE-REQUISITES

    Architecture

In this tutorial, we'll set up a container apps environment with a separate container for each project in the sample store app. The major components of the sample project include:

    • A Blazor Server front-end web app to display product information
    • A products API to list available products
    • An inventory API to determine how many products are in stock
    • GitHub Actions and Bicep templates to provision Azure resources and then build and deploy the sample app.

    You will explore these templates later in the tutorial.

    Public internet traffic should be proxied to the Blazor app. The back-end APIs should only be reachable via requests from the Blazor app inside the container apps environment. This setup can be achieved using container apps environment ingress configurations during deployment.

    An architecture diagram of the shopping app


    Project Sources

Want to follow along? Fork the sample below. The tutorial can be completed with or without Dapr integration. Pick the path you feel comfortable with. Dapr provides various benefits that make working with microservices easier - you can learn more in the docs. For this tutorial you will need GitHub and the Azure CLI.

    PICK YOUR PATH

    To follow along with this tutorial, fork the relevant sample project below.

    You can run the app locally from Visual Studio:

    • Right click on the Blazor Store project and select Set as Startup Project.
    • Press the start button at the top of Visual Studio to run the app.
    • (Once running) start each API in the background by
    • right-clicking on the project node
    • selecting Debug --> Start without debugging.

    Once the Blazor app is running, you should see something like this:

    An architecture diagram of the shopping app


    Configuring Azure credentials

    In order to deploy the application to Azure through GitHub Actions, you first need to create a service principal. The service principal will allow the GitHub Actions process to authenticate to your Azure subscription to create resources and deploy code. You can learn more about Service Principals in the Azure CLI documentation. For this step you'll need to be logged into the Azure CLI.

    1) If you have not done so already, make sure to fork the sample project to your own GitHub account or organization.

2) Once you have completed this step, create a service principal using the Azure CLI command below:

    ```azurecli
    $subscriptionId=$(az account show --query id --output tsv)
    az ad sp create-for-rbac --sdk-auth --name WebAndApiSample --role Contributor --scopes /subscriptions/$subscriptionId
    ```

3) Copy the JSON output of the CLI command to your clipboard

4) Under the settings tab of your forked GitHub repo, create a new secret named AzureSPN. The name is important to match the Bicep templates included in the project, which we'll review later. Paste the service principal values you copied to your clipboard into the secret and save your changes. This new secret will be used by the GitHub Actions workflow to authenticate to Azure.

A screenshot of adding GitHub secrets.

    Deploy using Github Actions

    You are now ready to deploy the application to Azure Container Apps using GitHub Actions. The sample application includes a GitHub Actions template that is configured to build and deploy any changes to a branch named deploy. The deploy branch does not exist in your forked repository by default, but you can easily create it through the GitHub user interface.

    1) Switch to the Actions tab along the top navigation of your GitHub repository. If you have not done so already, ensure that workflows are enabled by clicking the button in the center of the page.

    A screenshot showing how to enable GitHub actions

2) Navigate to the main Code tab of your repository and select the main dropdown. Enter deploy into the branch input box, and then select Create branch: deploy from 'main'.

    A screenshot showing how to create the deploy branch

3) On the new deploy branch, navigate down into the .github/workflows folder. You should see a file called deploy.yml, which contains the main GitHub Actions workflow script. Click on the file to view its content. You'll learn more about this file later in the tutorial.

4) Click the pencil icon in the upper right to edit the document.

5) Change the RESOURCE_GROUP_NAME: value to msdocswebappapis or another valid resource group name of your choosing.

6) In the upper right of the screen, select Start commit and then Commit changes to commit your edit. This will persist the change to the file and trigger the GitHub Actions workflow to build and deploy the app.

    A screenshot showing how to commit changes

7) Switch to the Actions tab along the top navigation again. You should see the workflow running to create the necessary resources and deploy the app. The workflow may take several minutes to run. When it completes successfully, all of the jobs should have a green checkmark icon next to them.

    The completed GitHub workflow.

    Explore the Azure resources

    Once the GitHub Actions workflow has completed successfully you can browse the created resources in the Azure portal.

1) On the left navigation, select Resource Groups. Next, choose the msdocswebappapis resource group that was created by the GitHub Actions workflow.

    2) You should see seven resources available that match the screenshot and table descriptions below.

    The resources created in Azure.

• inventory (Container app): The containerized inventory API.
• msdocswebappapisacr (Container registry): A registry that stores the built container images for your apps.
• msdocswebappapisai (Application Insights): Provides advanced monitoring, logging, and metrics for your apps.
• msdocswebappapisenv (Container apps environment): A container environment that manages networking, security, and resource concerns. All of your containers live in this environment.
• msdocswebappapislogs (Log Analytics workspace): A workspace environment for managing logging and analytics for the container apps environment.
• products (Container app): The containerized products API.
• store (Container app): The Blazor front-end web app.

    3) You can view your running app in the browser by clicking on the store container app. On the overview page, click the Application Url link on the upper right of the screen.

The link to browse the app.

    Understanding the GitHub Actions workflow

The GitHub Actions workflow created and deployed resources to Azure using the deploy.yml file in the .github/workflows folder at the root of the project. The primary purpose of this file is to respond to events - such as commits to a branch - and run jobs to accomplish tasks. The deploy.yml file in the sample project has three main jobs:

    • Provision: Create the necessary resources in Azure, such as the container apps environment. This step leverages Bicep templates to create the Azure resources, which you'll explore in a moment.
    • Build: Create the container images for the three apps in the project and store them in the container registry.
    • Deploy: Deploy the container images to the different container apps created during the provisioning job.

    The deploy.yml file also accepts parameters to make the workflow more dynamic, such as setting the resource group name or the Azure region resources will be provisioned to.

    Below is a commented version of the deploy.yml file that highlights the essential steps.

    name: Build and deploy .NET application to Container Apps

    # Trigger the workflow on pushes to the deploy branch
    on:
    push:
    branches:
    - deploy

    env:
    # Set workflow variables
    RESOURCE_GROUP_NAME: msdocswebappapis

    REGION: eastus

    STORE_DOCKER: Store/Dockerfile
    STORE_IMAGE: store

    INVENTORY_DOCKER: Store.InventoryApi/Dockerfile
    INVENTORY_IMAGE: inventory

    PRODUCTS_DOCKER: Store.ProductApi/Dockerfile
    PRODUCTS_IMAGE: products

    jobs:
    # Create the required Azure resources
    provision:
    runs-on: ubuntu-latest

    steps:

    - name: Checkout to the branch
    uses: actions/checkout@v2

    - name: Azure Login
    uses: azure/login@v1
    with:
    creds: ${{ secrets.AzureSPN }}

    - name: Create resource group
    uses: azure/CLI@v1
    with:
    inlineScript: >
    echo "Creating resource group in Azure"
    echo "Executing 'az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}'"
    az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}

    # Use Bicep templates to create the resources in Azure
    - name: Creating resources
    uses: azure/CLI@v1
    with:
    inlineScript: >
    echo "Creating resources"
    az deployment group create --resource-group ${{ env.RESOURCE_GROUP_NAME }} --template-file '/github/workspace/Azure/main.bicep' --debug

    # Build the three app container images
    build:
    runs-on: ubuntu-latest
    needs: provision

    steps:

    - name: Checkout to the branch
    uses: actions/checkout@v2

    - name: Azure Login
    uses: azure/login@v1
    with:
    creds: ${{ secrets.AzureSPN }}

    - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v1

    - name: Login to ACR
    run: |
    set -euo pipefail
    access_token=$(az account get-access-token --query accessToken -o tsv)
    refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
    docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

    - name: Build the products api image and push it to ACR
    uses: docker/build-push-action@v2
    with:
    push: true
    tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}
    file: ${{ env.PRODUCTS_DOCKER }}

    - name: Build the inventory api image and push it to ACR
    uses: docker/build-push-action@v2
    with:
    push: true
    tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}
    file: ${{ env.INVENTORY_DOCKER }}

    - name: Build the frontend image and push it to ACR
    uses: docker/build-push-action@v2
    with:
    push: true
    tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}
    file: ${{ env.STORE_DOCKER }}

    # Deploy the three container images
    deploy:
    runs-on: ubuntu-latest
    needs: build

    steps:

    - name: Checkout to the branch
    uses: actions/checkout@v2

    - name: Azure Login
    uses: azure/login@v1
    with:
    creds: ${{ secrets.AzureSPN }}

    - name: Installing Container Apps extension
    uses: azure/CLI@v1
    with:
    inlineScript: >
    az config set extension.use_dynamic_install=yes_without_prompt

    az extension add --name containerapp --yes

    - name: Login to ACR
    run: |
    set -euo pipefail
    access_token=$(az account get-access-token --query accessToken -o tsv)
    refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
    docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

    - name: Deploy Container Apps
    uses: azure/CLI@v1
    with:
    inlineScript: >
    az containerapp registry set -n products -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

    az containerapp update -n products -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}

    az containerapp registry set -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

    az containerapp update -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}

    az containerapp registry set -n store -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

    az containerapp update -n store -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}

    - name: logout
    run: >
    az logout

    Understanding the Bicep templates

    During the provisioning stage of the GitHub Actions workflow, the main.bicep file is processed. Bicep files provide a declarative way of generating resources in Azure and are ideal for managing infrastructure as code. You can learn more about Bicep in the related documentation. The main.bicep file in the sample project creates the following resources:

    • The container registry to store images of the containerized apps.
    • The container apps environment, which handles networking and resource management for the container apps.
    • Three container apps - one for the Blazor front-end and two for the back-end product and inventory APIs.
    • Configuration values to connect these services together

    main.bicep without Dapr

    param location string = resourceGroup().location

// create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

// create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

// create the various configuration pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

// create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the store api container app
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    main.bicep with Dapr


    param location string = resourceGroup().location

// create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

// create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

// create the various config pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

// create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the store api container app
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: true
    }
    }


    Bicep Modules

    The main.bicep file references modules to create resources, such as module products. Modules are a feature of Bicep templates that enable you to abstract resource declarations into their own files or sub-templates. As the main.bicep file is processed, the defined modules are also evaluated. Modules allow you to create resources in a more organized and reusable way. They can also define input and output parameters that are passed to and from the parent template, such as the name of a resource.

    For example, the environment.bicep module extracts the details of creating a container apps environment into a reusable template. The module defines necessary resource dependencies such as Log Analytics Workspaces and an Application Insights instance.

    environment.bicep without Dapr

    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString

    environment.bicep with Dapr


    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString


The container_app.bicep template defines numerous parameters to provide a reusable template for creating container apps. This allows the module to be used in other CI/CD pipelines as well.

    container_app.bicep without Dapr

    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn

    container_app.bicep with Dapr


    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param appProtocol string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn


    Understanding configuration differences with Dapr

    The code for this specific sample application is largely the same whether or not Dapr is integrated. However, even with this simple app, there are a few benefits and configuration differences when using Dapr that are worth exploring.

    In this scenario most of the changes are related to communication between the container apps. However, you can explore the full range of Dapr benefits by reading the Dapr integration with Azure Container Apps article in the conceptual documentation.

    Without Dapr

Without Dapr, the main.bicep template handles wiring up the front-end store app to communicate with the back-end APIs by manually managing environment variables. The Bicep template retrieves the fully qualified domain names (FQDNs) of the API apps as output parameters when they are created. Those values are then set as environment variables on the store container app.


// Retrieve environment variables from API container creation
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

// create the store api container app, passing in config
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

The environment variables are then retrieved inside the Program class and used to configure the base URLs of the corresponding HTTP clients.


    builder.Services.AddHttpClient("Products", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("ProductsApi")));
    builder.Services.AddHttpClient("Inventory", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("InventoryApi")));

    With Dapr

    Dapr can be enabled on a container app when it is created, as seen below. This configuration adds a Dapr sidecar to the app to streamline discovery and communication features between the different container apps in your environment.


// Create the container app with Dapr enabled
    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]

    # Rest of template omitted for brevity...
    }
    }

    Some of these Dapr features can be surfaced through the program file. You can configure your HttpClient to leverage Dapr configurations when communicating with other apps in your environment.


    // reconfigure code to make requests to Dapr sidecar
    var baseURL = (Environment.GetEnvironmentVariable("BASE_URL") ?? "http://localhost") + ":" + (Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500");
    builder.Services.AddHttpClient("Products", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Products");
    });

    builder.Services.AddHttpClient("Inventory", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Inventory");
    });
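To round out the picture, here's a minimal sketch (not taken from the sample repository) of how one of these named clients might be consumed elsewhere in the app; the class name and the api/products route are hypothetical placeholders:

```csharp
// Hypothetical consumer of the named "Products" client registered above.
// The client's BaseAddress and dapr-app-id header are already configured,
// so calls are routed through the Dapr sidecar to the Products container app.
public class ProductsClient
{
    private readonly IHttpClientFactory _factory;

    public ProductsClient(IHttpClientFactory factory) => _factory = factory;

    public async Task<string> GetProductsJsonAsync()
    {
        var client = _factory.CreateClient("Products");
        return await client.GetStringAsync("api/products"); // placeholder route
    }
}
```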


    Clean up resources

    If you're not going to continue to use this application, you can delete the Azure Container Apps and all the associated services by removing the resource group.

    Follow these steps in the Azure portal to remove the resources you created:

    1. In the Azure portal, navigate to the msdocswebappsapi resource group using the left navigation or search bar.
    2. Select the Delete resource group button at the top of the resource group Overview.
    3. Enter the resource group name msdocswebappsapi in the Are you sure you want to delete "msdocswebappsapi" confirmation dialog.
    4. Select Delete.
      The process to delete the resource group may take a few minutes to complete.
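If you prefer the Azure CLI, the equivalent cleanup is a single command (using the resource group name from the steps above):

```bash
# Delete the resource group and everything inside it
az group delete --name msdocswebappsapi
```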

    · 10 min read
    Ayca Bas

    Welcome to Day 20 of #30DaysOfServerless!

Every day, millions of people spend their precious time in productivity tools. What if you could use the data and intelligence behind Microsoft applications (Microsoft Teams, Outlook, and many other Office apps) to build seamless automations and custom apps that boost productivity?

    In this post, we'll learn how to build a seamless onboarding experience for new employees joining a company with the power of Microsoft Graph, integrated with Event Hubs and Logic Apps!


    What We'll Cover

    • ✨ The power of Microsoft Graph
    • 🖇️ How do Microsoft Graph and Event Hubs work together?
    • 🛠 Let's Build an Onboarding Workflow!
      • 1️⃣ Setup Azure Event Hubs + Key Vault
      • 2️⃣ Subscribe to users, receive change notifications from Logic Apps
      • 3️⃣ Create Onboarding workflow in the Logic Apps
    • 🚀 Debug: Your onboarding experience
    • ✋ Exercise: Try this tutorial out yourself!
    • 📚 Resources: For Self-Study


    ✨ The Power of Microsoft Graph

Microsoft Graph is the gateway to data and intelligence in the Microsoft 365 platform. Microsoft Graph exposes REST APIs and client libraries to access data across Microsoft 365 core services such as Calendar, Teams, To Do, Outlook, People, Planner, OneDrive, OneNote and more.

    Overview of Microsoft Graph

You can build custom experiences with Microsoft Graph, such as automating the onboarding process for new employees. When new employees are created in Azure Active Directory, they are automatically added to the Onboarding team on Microsoft Teams.

    Solution architecture


    🖇️ Microsoft Graph with Event Hubs

    Microsoft Graph uses a webhook mechanism to track changes in resources and deliver change notifications to the clients. For example, with Microsoft Graph Change Notifications, you can receive change notifications when:

    • a new task is added in the to-do list
    • a user changes the presence status from busy to available
    • an event is deleted/cancelled from the calendar

If you'd like to track a large set of resources at a high frequency, use Azure Event Hubs instead of traditional webhooks to receive change notifications. Azure Event Hubs is a popular real-time event ingestion and distribution service built for scale.

    EVENT GRID - PARTNER EVENTS

Microsoft Graph Change Notifications can also be received by using Azure Event Grid -- currently available for Microsoft Partners! Read the Partner Events Overview documentation for details.

    Setup Azure Event Hubs + Key Vault.

To get Microsoft Graph Change Notifications delivered to Azure Event Hubs, we'll have to set up Azure Event Hubs and Azure Key Vault. We'll use Azure Key Vault to give Microsoft Graph access to the Event Hubs connection string.

    1️⃣ Create Azure Event Hubs

1. Go to the Azure Portal and select Create a resource, type Event Hubs and select Create.
    2. Fill in the Event Hubs namespace creation details, and then click Create.
    3. Go to the newly created Event Hubs namespace page, select Event Hubs tab from the left pane and + Event Hub:
      • Name your Event Hub as Event Hub
      • Click Create.
    4. Click the name of the Event Hub, and then select Shared access policies and + Add to add a new policy:
      • Give a name to the policy
      • Check Send and Listen
      • Click Create.
    5. After the policy has been created, click the name of the policy to open the details panel, and then copy the Connection string-primary key value. Write it down; you'll need it for the next step.
    6. Go to Consumer groups tab in the left pane and select + Consumer group, give a name for your consumer group as onboarding and select Create.

    2️⃣ Create Azure Key Vault

    1. Go to Azure Portal and select Create a resource, type Key Vault and select Create.
    2. Fill in the Key Vault creation details, and then click Review + Create.
    3. Go to newly created Key Vault and select Secrets tab from the left pane and click + Generate/Import:
      • Give a name to the secret
      • For the value, paste in the connection string you generated at the Event Hubs step
      • Click Create
      • Copy the name of the secret.
    4. Select Access Policies from the left pane and + Add Access Policy:
      • For Secret permissions, select Get
      • For Principal, select Microsoft Graph Change Tracking
      • Click Add.
    5. Select Overview tab from the left pane and copy the Vault URI.

Subscribe to change notifications using Logic Apps

To start receiving Microsoft Graph Change Notifications, we'll need to create a subscription to the resource we'd like to track - here, users. We'll use Azure Logic Apps to create the subscription.

To create a subscription for Microsoft Graph Change Notifications, we'll need to make an HTTP POST request to https://graph.microsoft.com/v1.0/subscriptions. Microsoft Graph requires Azure Active Directory authentication to make API calls. First, we'll register an app in Azure Active Directory, and then we'll make the Microsoft Graph Subscription API call with Azure Logic Apps.

    1️⃣ Create an app in Azure Active Directory

    1. In the Azure Portal, go to Azure Active Directory and select App registrations from the left pane and select + New registration. Fill in the details for the new App registration form as below:
      • Name: Graph Subscription Flow Auth
      • Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
      • Select Register.
    2. Go to newly registered app in Azure Active Directory, select API permissions:
      • Select + Add a permission and Microsoft Graph
      • Select Application permissions and add User.Read.All and Directory.Read.All.
      • Select Grant admin consent for the organization
    3. Select Certificates & secrets tab from the left pane, select + New client secret:
      • Choose desired expiry duration
      • Select Add
      • Copy the value of the secret.
    4. Go to Overview from the left pane, copy Application (client) ID and Directory (tenant) ID.

    2️⃣ Create subscription with Azure Logic Apps

1. Go to the Azure Portal and select Create a resource, type Logic apps and select Create.

    2. Fill in the Logic Apps creation details, and then click Create.

    3. Go to the newly created Logic Apps page, select Workflows tab from the left pane and select + Add:

      • Give a name to the new workflow as graph-subscription-flow
      • Select Stateful as a state type
      • Click Create.
    4. Go to graph-subscription-flow, and then select Designer tab.

    5. In the Choose an operation section, search for Schedule and select Recurrence as a trigger. Fill in the parameters as below:

      • Interval: 61
      • Frequency: Minute
      • Time zone: Select your own time zone
      • Start time: Set a start time
    6. Select + button in the flow and select add an action. Search for HTTP and select HTTP as an action. Fill in the parameters as below:

      • Method: POST
      • URI: https://graph.microsoft.com/v1.0/subscriptions
      • Headers:
        • Key: Content-type
        • Value: application/json
      • Body:
  {
    "changeType": "created, updated",
    "clientState": "secretClientValue",
    "expirationDateTime": "@{addHours(utcNow(), 1)}",
    "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47",
    "resource": "users"
  }

  In notificationUrl, make sure to replace <YOUR-VAULT-URI> with the Vault URI and <YOUR-KEY-VAULT-SECRET-NAME> with the secret name that you copied from the Key Vault.

  In resource, define the resource type you'd like to track changes for. For our example, we will track changes to the users resource.

      • Authentication:
        • Authentication type: Active Directory OAuth
        • Authority: https://login.microsoft.com
        • Tenant: Directory (tenant) ID copied from AAD app
        • Audience: https://graph.microsoft.com
        • Client ID: Application (client) ID copied from AAD app
        • Credential Type: Secret
        • Secret: value of the secret copied from AAD app
    7. Select Save and run your workflow from the Overview tab.

  Check your subscription in Graph Explorer: If you'd like to make sure that your subscription was created successfully by Logic Apps, you can go to Graph Explorer, log in with your Microsoft 365 account, and make a GET request to https://graph.microsoft.com/v1.0/subscriptions. Your subscription should appear in the response after it's created successfully.

    Subscription workflow success

After the subscription is created successfully by Logic Apps, Azure Event Hubs will receive notifications whenever a new user is created in Azure Active Directory.


    Create Onboarding workflow in Logic Apps

We'll create a second workflow in the Logic Apps to receive change notifications from Event Hubs when a new user is created in Azure Active Directory, and to add the new user to the Onboarding team on Microsoft Teams.

    1. Go to the Logic Apps you created in the previous steps, select Workflows tab and create a new workflow by selecting + Add:
      • Give a name to the new workflow as teams-onboarding-flow
      • Select Stateful as a state type
      • Click Create.
    2. Go to teams-onboarding-flow, and then select Designer tab.
    3. In the Choose an operation section, search for Event Hub, select When events are available in Event Hub as a trigger. Setup Event Hub connection as below:
      • Create Connection:
        • Connection name: Connection
        • Authentication Type: Connection String
        • Connection String: Go to Event Hubs > Shared Access Policies > RootManageSharedAccessKey and copy Connection string–primary key
        • Select Create.
      • Parameters:
        • Event Hub Name: Event Hub
        • Consumer Group Name: onboarding
    4. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: Events
    5. Inside For each, select + in the flow and add an action, search for Data operations and select Parse JSON. Fill in Parse JSON action as below:
      • Content: Events Content
      • Schema: Copy the json content from schema-parse.json and paste as a schema
    6. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: value
      1. Inside For each, select + in the flow and add an action, search for Microsoft Teams and select Add a member to a team. Login with your Microsoft 365 account to create a connection and fill in Add a member to a team action as below:
  • Team: Create an Onboarding team on Microsoft Teams and select it
      • A user AAD ID for the user to add to a team: id
    7. Select Save.

    🚀 Debug your onboarding experience

To debug our onboarding experience, we'll create a new user in Azure Active Directory and check whether they're automatically added to the Onboarding team on Microsoft Teams.

    1. Go to Azure Portal and select Azure Active Directory from the left pane and go to Users. Select + New user and Create new user. Fill in the details as below:

      • User name: JaneDoe
      • Name: Jane Doe

      new user in Azure Active Directory

2. When you add Jane Doe as a new user, it should trigger the teams-onboarding-flow to run.

  teams onboarding flow success

3. Once the teams-onboarding-flow runs successfully, you should be able to see Jane Doe as a member of the Onboarding team on Microsoft Teams! 🥳

  new member in Onboarding team on Microsoft Teams

    Congratulations! 🎉

    You just built an onboarding experience using Azure Logic Apps, Azure Event Hubs and Azure Key Vault.


    📚 Resources


    · 5 min read
    Mike Morton

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Log Streaming - in Azure Portal
    • Console Connect - in Azure Portal
    • Metrics - using Azure Monitor
    • Log Analytics - using Azure Monitor
    • Metric Alerts and Log Alerts - using Azure Monitor


In past weeks, @kendallroden wrote about what it means to be Cloud-Native and @Anthony Chu covered the various ways to get your apps running on Azure Container Apps. Today, we will talk about the observability tools you can use to observe, debug, and diagnose your Azure Container Apps.

    Azure Container Apps provides several observability features to help you debug and diagnose your apps. There are both Azure portal and CLI options you can use to help understand the health of your apps and help identify when issues arise.

    While these features are helpful throughout your container app’s lifetime, there are two that are especially helpful. Log streaming and console connect can be a huge help in the initial stages when issues often rear their ugly head. Let's dig into both of these a little.

    Log Streaming

Log streaming allows you to use the Azure portal to view the streaming logs from your app. You’ll see the logs written from the app to the container’s console (stderr and stdout). If your app is running multiple revisions, you can choose from which revision to view logs. You can also select a specific replica if your app is configured to scale. Lastly, you can choose from which container to view the log output. This is useful when you are running a custom or Dapr sidecar container.

view streaming logs

    Here’s an example CLI command to view the logs of a container app.

    az containerapp logs show -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.

    Console Connect

    In the Azure portal, you can connect to the console of a container in your app. Like log streaming, you can select the revision, replica, and container if applicable. After connecting to the console of the container, you can execute shell commands and utilities that you have installed in your container. You can view files and their contents, monitor processes, and perform other debugging tasks.

    This can be great for checking configuration files or even modifying a setting or library your container is using. Of course, updating a container in this fashion is not something you should do to a production app, but tweaking and re-testing an app in a non-production environment can speed up development.

    Here’s an example CLI command to connect to the console of a container app.

    az containerapp exec -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.

    Metrics

    Azure Monitor collects metric data from your container app at regular intervals to help you gain insights into the performance and health of your container app. Container apps provide these metrics:

    • CPU usage
    • Memory working set bytes
    • Network in bytes
    • Network out bytes
    • Requests
    • Replica count
    • Replica restart count

    Here you can see the metrics explorer showing the replica count for an app as it scaled from one replica to fifteen, and then back down to one.

    You can also retrieve metric data through the Azure CLI.
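For instance, here's a hedged Azure CLI sketch for pulling the Requests metric; the subscription, resource group, and app names in the resource ID are placeholders:

```bash
# Query the Requests metric for a container app over one-minute intervals
# (the resource ID below is a placeholder).
az monitor metrics list \
  --resource "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.App/containerApps/MyContainerapp" \
  --metric Requests \
  --interval PT1M
```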

    Log Analytics

Azure Monitor Log Analytics is great for viewing your historical logs emitted from your container apps. There are two custom tables of interest: ContainerAppConsoleLogs_CL, which contains all the log messages written by your app (stdout and stderr), and ContainerAppSystemLogs_CL, which contains the system messages from the Azure Container Apps service.

    You can also query Log Analytics through the Azure CLI.
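As an illustration, here's a hedged sketch of such a query (assuming the Log Analytics CLI extension is available; the workspace GUID is a placeholder):

```bash
# Pull recent console logs for a specific container app from Log Analytics
# (the workspace GUID is a placeholder).
az monitor log-analytics query \
  --workspace "<LOG_ANALYTICS_WORKSPACE_GUID>" \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'MyContainerapp' | take 20"
```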

    Alerts

Azure Monitor alerts notify you so that you can respond quickly to critical issues. There are two types of alerts that you can define: metric alerts and log alerts.

    You can create alert rules from metric charts in the metric explorer and from queries in Log Analytics. You can also define and manage alerts from the Monitor|Alerts page.

    Here is what creating an alert looks like in the Azure portal. In this case we are setting an alert rule from the metric explorer to trigger an alert if the replica restart count for a specific container app is greater than two within the last fifteen minutes.

    To learn more about alerts, refer to Overview of alerts in Microsoft Azure.

    Conclusion

In this article, we looked at several ways to observe, debug, and diagnose your Azure Container Apps. As you can see, there are rich portal tools and a complete set of CLI commands to use. All of these tools are helpful throughout the lifecycle of your app; be sure to take advantage of them when you run into an issue, and to help prevent issues in the first place.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.


    · 10 min read
    Brian Benz

    Welcome to Day 18 of #30DaysOfServerless!

    Yesterday my Serverless September post introduced you to making Azure Logic Apps and Azure Cosmos DB work together with a sample application that collects weather data. Today I'm sharing a more robust solution that actually reads my mail. Let's learn about Teaching the cloud to read your mail!

    Ready? Let's go!


    What We'll Cover

    • Introduction to the ReadMail solution
    • Setting up Azure storage, Cosmos DB and Computer Vision
    • Connecting it all together with a Logic App
    • Resources: For self-study!


    Introducing the ReadMail solution

    The US Postal system offers a subscription service that sends you images of mail it will be delivering to your home. I decided it would be cool to try getting Azure to collect data based on these images, so that I could categorize my mail and track the types of mail that I received.

    To do this, I used Azure storage, Cosmos DB, Logic Apps, and computer vision. When a new email comes in from the US Postal service (USPS), it triggers a logic app that:

    • Posts attachments to Azure storage
    • Triggers Azure Computer vision to perform an OCR function on attachments
    • Extracts any results into a JSON document
    • Writes the JSON document to Cosmos DB

    workflow for the readmail solution

    In this post I'll walk you through setting up the solution for yourself.

    Prerequisites

    Setup Azure Services

First, we'll create all of the services our Logic App will use, then we'll create the Logic App itself.

    1. Azure Storage

    We'll be using Azure storage to collect attached images from emails as they arrive. Adding images to Azure storage will also trigger a workflow that performs OCR on new attached images and stores the OCR data in Cosmos DB.

    To create a new Azure storage account from the portal dashboard, Select Create a resource > Storage account > Create.

    The Basics tab covers all of the features and information that we will need for this solution:

| Section | Field | Required or optional | Description |
| --- | --- | --- | --- |
| Project details | Subscription | Required | Select the subscription for the new storage account. |
| Project details | Resource group | Required | Create a new resource group that you will use for storage, Cosmos DB, Computer Vision and the Logic App. |
| Instance details | Storage account name | Required | Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. |
| Instance details | Region | Required | Select the appropriate region for your storage account. |
| Instance details | Performance | Required | Select Standard performance for general-purpose v2 storage accounts (default). |
| Instance details | Redundancy | Required | Select locally-redundant storage (LRS) for this example. |

    Select Review + create to accept the remaining default options, then validate and create the account.
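If you prefer scripting the same setup, here's a hedged Azure CLI sketch; the resource group name, account name, and region are placeholders:

```bash
# Create a resource group and a general-purpose v2 storage account with LRS
# (names and region are placeholders).
az group create --name readmail-rg --location eastus
az storage account create \
  --name readmailstorage123 \
  --resource-group readmail-rg \
  --location eastus \
  --sku Standard_LRS
```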

    2. Azure CosmosDB

Cosmos DB will be used to store the JSON documents returned by the Computer Vision OCR process.

    See more details and screen shots for setting up CosmosDB in yesterday's Serverless September post - Using Logic Apps with Cosmos DB

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then for simplicity use the same resource group you created when you set up storage. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

| Setting | Action |
| --- | --- |
| Container ID | id |
| Container partition | /id |

    Press OK to create a database and container
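The same account, database, and container can also be created with the Azure CLI; this is a hedged sketch where the account and database names are placeholders, while the container name and partition key match the table above:

```bash
# Create the Cosmos DB (Core SQL) account, database, and container
# (account and database names are placeholders).
az cosmosdb create --name readmail-cosmos --resource-group readmail-rg
az cosmosdb sql database create --account-name readmail-cosmos --resource-group readmail-rg --name readmail-db
az cosmosdb sql container create \
  --account-name readmail-cosmos \
  --resource-group readmail-rg \
  --database-name readmail-db \
  --name id \
  --partition-key-path /id
```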

    3. Azure Computer Vision

    Azure Cognitive Services' Computer Vision will perform an OCR process on each image attachment that is stored in Azure storage.

    From the portal dashboard, Select Create a resource > AI + Machine Learning > Computer Vision > Create.

    The Basics and Identity tabs cover all of the features and information that we will need for this solution:

    Basics Tab

| Section | Field | Required or optional | Description |
| --- | --- | --- | --- |
| Project details | Subscription | Required | Select the subscription for the new service. |
| Project details | Resource group | Required | Use the same resource group that you used for Azure storage and Cosmos DB. |
| Instance details | Region | Required | Select the appropriate region for your Computer Vision service. |
| Instance details | Name | Required | Choose a unique name for your Computer Vision service. |
| Instance details | Pricing | Required | Select the free tier for this example. |

    Identity Tab

| Section | Field | Required or optional | Description |
| --- | --- | --- | --- |
| System assigned managed identity | Status | Required | Enable system assigned identity to grant the resource access to other existing resources. |

    Select Review + create to accept the remaining default options, then validate and create the account.
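For a scripted alternative, here's a hedged Azure CLI sketch; the service name and region are placeholders, and the second command enables the system-assigned identity described in the Identity tab:

```bash
# Create the Computer Vision resource on the free tier, then enable its
# system-assigned managed identity (name and region are placeholders).
az cognitiveservices account create \
  --name readmail-vision \
  --resource-group readmail-rg \
  --kind ComputerVision \
  --sku F0 \
  --location eastus
az cognitiveservices account identity assign \
  --name readmail-vision \
  --resource-group readmail-rg
```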


    Connect it all with a Logic App

    Now we're ready to put this all together in a Logic App workflow!

    1. Create Logic App

From the portal dashboard, select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults.

    2. Create Workflow: Add Trigger

    Once the Logic App is created, select Create a workflow from designer.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for outlook.com on the right under Add a trigger. Choose outlook.com. Choose When a new email arrives as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Set the following values:

| Parameter | Value |
| --- | --- |
| Folder | Inbox |
| Importance | Any |
| Only With Attachments | Yes |
| Include Attachments | Yes |

    Then add a new parameter:

| Parameter | Value |
| --- | --- |
| From | Add the email address that sends you the email with attachments |
    3. Create Workflow: Add Action (for Trigger)

    Choose add an action and choose control > for-each.

    logic app for each

    Inside the for-each action, in Select an output from previous steps, choose attachments. Then, again inside the for-each action, add the create blob action:

    Set the following values:

| Parameter | Value |
| --- | --- |
| Folder Path | /mailreaderinbox |
| Blob Name | Attachments Name |
| Blob Content | Attachments Content |

This extracts attachments from the email and creates a new blob for each attachment.

    Next, inside the same for-each action, add the get blob content action.

    Set the following values:

| Parameter | Value |
| --- | --- |
| Blob | id |
| Infer content type | Yes |

We create and read from a blob for each attachment because Computer Vision needs a non-virtual source to read from when performing an OCR process. Because we enabled system assigned identity to grant Computer Vision access to other existing resources, it can access the blob but not the outlook.com attachment. Also, we pass the ID of the blob to use as a unique ID when writing to Cosmos DB.

    create blob from attachments

    Next, inside the same for-each action, choose add an action and choose control > condition. Set the value to Media Type > is equal to > image/JPEG

    The USPS sends attachments of multiple types, but we only want to scan attachments that have images of our mail, which are always JPEG images. If the condition is true, we will process the image with Computer Vision OCR and write the results to a JSON document in CosmosDB.

    In the True section of the condition, add an action and choose Computer Vision API > Optical Character Recognition (OCR) to JSON.

    Set the following values:

| Parameter | Value |
| --- | --- |
| Image Source | Image Content |
| Image content | File Content |

    In the same True section of the condition, choose add an action and choose Cosmos DB. Choose Create or Update Document from the actions. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

Be sure to use the ID passed from blob storage as your unique ID for Cosmos DB. That way you can troubleshoot any JSON or OCR issues by tracing the JSON document in Cosmos DB back to the blob in Azure storage. Also, include the Computer Vision JSON response, as it contains the results of the Computer Vision OCR scan. All other elements are optional.

    4. TEST WORKFLOW

When complete, you should have an action in the Logic App designer that looks something like this:

    Logic App workflow create or update document in cosmosdb

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB each time that an email arrives with image attachments.

Check the data in Cosmos DB by opening the Data Explorer, then choosing the container you created and selecting Items. You should see documents similar to this:

    Logic App workflow with trigger and action

Congratulations

    You just built your personal ReadMail solution with Logic Apps! 🎉


    Resources: For self-study!

Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!


    · 6 min read
    Brian Benz

    Welcome to Day 17 of #30DaysOfServerless!

    In past weeks, we've covered serverless technologies that provide core capabilities (functions, containers, microservices) for building serverless solutions. This week we're looking at technologies that make service integrations more seamless, starting with Logic Apps. Let's look at one usage example today!

    Ready? Let's Go!


    What We'll Cover

    • Introduction to Logic Apps
• Setting up Cosmos DB for Logic Apps
    • Setting up a Logic App connection and event
    • Writing data to Cosmos DB from a Logic app
    • Resources: For self-study!


    Introduction to Logic Apps

    Previously in Serverless September, we've covered Azure Functions, where the event triggers code. In Logic Apps, the event triggers a workflow that you design. Logic Apps enable serverless applications to connect to external sources for data then automate business processes via workflows.

In this post I'll walk you through setting up a Logic App that works with Cosmos DB. For this example, we'll connect to the MSN weather service and design a Logic App workflow that collects data when the weather changes, and writes the data to Cosmos DB.

    PREREQUISITES

    Setup Cosmos DB for Logic Apps

Cosmos DB has many APIs to choose from, but to use the default Logic App connection, we need to choose the Cosmos DB SQL API. We'll set this up via the Azure Portal.

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then create a new resource group called CosmosWeather. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create

Azure Cosmos DB is available in two different capacity modes: provisioned throughput and serverless. You can perform the same database operations in both modes, but the way you get billed for these operations is different. We will be using provisioned throughput and the free tier for this example.

    Setup the CosmosDB account

Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

| Setting | Action |
| --- | --- |
| Container ID | id |
| Container partition | /id |

    Press OK to create a database and container

    A database is analogous to a traditional DBMS namespace. It's used to organize one or more containers.

    Setup the CosmosDB Container

Now we're ready to set up our Logic App and write to Cosmos DB!

    Setup Logic App connection + event

Once the Cosmos DB SQL API account is created, we can set up our Logic App. From the portal dashboard, select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults. Once your new Logic App is created, select Create a workflow from designer to get started.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for weather on the right under Add a trigger. Choose MSN Weather. Choose When the current conditions change as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Add a location. Valid locations are City, Region, State, Country, Landmark, Postal Code, latitude and longitude. This triggers a new workflow when the conditions change for a location.

    Write data from Logic App to Cosmos DB

    Now we are ready to set up the action to write data to Cosmos DB. Choose add an action and choose Cosmos DB.

    An action is each step in a workflow after the trigger. Every action runs some operation in a workflow.

    In this case, we will be writing a JSON document to the Cosmos DB container we created earlier. Choose Create or Update Document from the actions. At this point you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger

Start with the connection to set up the Cosmos DB action. Select Access Key, and provide the primary read-write key (found under Keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

You will need a unique ID for each document that you write to Cosmos DB; for that you can use an expression. Because we declared id to be our unique ID in Cosmos DB, we will use that for the name. Under expressions, type guid() and press enter to add a unique ID to the JSON document. When complete, you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger and action
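As an illustration only, the raw body of that Create or Update Document action might look something like the sketch below; id uses the guid() expression described above, and the weather fields are hypothetical placeholders for whichever dynamic content you select:

```json
{
  "id": "@{guid()}",
  "location": "<Location dynamic content>",
  "temperature": "<Temperature dynamic content>",
  "conditions": "<Conditions dynamic content>"
}
```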

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB over the next few minutes.

Check the data in Cosmos DB by opening the Data Explorer, then choosing the container you created and selecting Items. You should see documents similar to this:

    Logic App workflow with trigger and action

    Resources: For self-study!

Once you've grasped the basics in this post, there is so much more to learn!

    Thanks for stopping by!


    · 4 min read
    Nitya Narasimhan
    Devanshi Joshi

    Welcome to Day 15 of #30DaysOfServerless!

    This post marks the midpoint of our Serverless on Azure journey! Our Week 2 Roadmap showcased two key technologies - Azure Container Apps (ACA) and Dapr - for building serverless microservices. We'll also look at what happened elsewhere in #ServerlessSeptember, then set the stage for our next week's focus: Serverless Integrations.

    Ready? Let's Go!


    What We'll Cover

    • ICYMI: This Week on #ServerlessSeptember
    • Recap: Microservices, Azure Container Apps & Dapr
    • Coming Next: Serverless Integrations
    • Exercise: Take the Cloud Skills Challenge
    • Resources: For self-study!

    This Week In Events

    We had a number of activities happen this week - here's a quick summary:

    This Week in #30Days

    In our #30Days series we focused on Azure Container Apps and Dapr.

• In Hello Container Apps we learned how Azure Container Apps helps you run microservices and containerized apps on serverless platforms. And we built and deployed our first ACA.
    • In Microservices Communication we explored concepts like environments and virtual networking, with a hands-on example to show how two microservices communicate in a deployed ACA.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and how to configure autoscaling for your ACA based on KEDA-supported triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr) and learned how its Building Block APIs and sidecar architecture make it easier to develop microservices with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
• Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Here's a visual recap:

    Self Study: Code Samples & Tutorials

    There's no better way to get familiar with the concepts, than to dive in and play with code samples and hands-on tutorials. Here are 4 resources to bookmark and try out:

    1. Dapr Quickstarts - these walk you through samples showcasing individual Building Block APIs - with multiple language options available.
    2. Dapr Tutorials provides more complex examples of microservices applications and tools usage, including a Distributed Calculator polyglot app.
    3. Next, try to Deploy a Dapr application to Azure Container Apps to get familiar with the process of setting up the environment, then deploying the app.
    4. Or, explore the many Azure Container Apps samples showcasing various features and more complex architectures tied to real world scenarios.

    What's Next: Serverless Integrations!

    So far we've talked about core technologies (Azure Functions, Azure Container Apps, Dapr) that provide foundational support for your serverless solution. Next, we'll look at Serverless Integrations - specifically at technologies like Azure Logic Apps and Azure Event Grid that automate workflows and create seamless end-to-end solutions that integrate other Azure services in serverless-friendly ways.

    Take the Challenge!

    The Cloud Skills Challenge is still going on, and we've already had hundreds of participants join and complete the learning modules to skill up on Serverless.

    There's still time to join and get yourself on the leaderboard. Get familiar with Azure Functions, SignalR, Logic Apps, Azure SQL and more - in serverless contexts!!


Image showing container apps role assignment

  • Lastly, we need to restart the container app revision, to do so run the command below:

     ##Get revision name and assign it to a variable
    $REVISION_NAME = (az containerapp revision list `
    --name $BACKEND_SVC_NAME `
    --resource-group $RESOURCE_GROUP `
    --query [0].name)

    ##Restart revision by name
    az containerapp revision restart `
    --resource-group $RESOURCE_GROUP `
    --name $BACKEND_SVC_NAME `
    --revision $REVISION_NAME
  • Run end-to-end Test on Azure

From the Azure Portal, select the Azure Container App orders-processor and navigate to Log stream under the Monitoring tab; leave the stream connected and open. Then, from the Azure Portal, select the Azure Service Bus Namespace ordersservices, select the topic orderreceivedtopic, select the subscription named orders-processor-subscription, and click on Service Bus Explorer (preview). From there we need to publish/send a message. Use the JSON payload below:

```json
{
  "data": {
    "reference": "Order 150",
    "quantity": 150,
    "createdOn": "2022-05-10T12:45:22.0983978Z"
  }
}
```

If all is configured correctly, you should start seeing information logs in the Container Apps Log stream, similar to the images below.

Image showing publishing messages from Azure Service

    Information logs on the Log stream of the deployed Azure Container App Image showing ACA Log Stream

    🎉 CONGRATULATIONS

You have successfully deployed an Azure Container App to the cloud and configured the Dapr Pub/Sub API with Azure Service Bus.

    9. Clean up

    If you are done with the tutorial, use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name $RESOURCE_GROUP

    Exercise

    I left for you the configuration of the Dapr State Store API with Azure Cosmos DB :)

When you look at the action method OrderReceived in the controller ExternalOrdersController, you will see that I left a line with a ToDo: note; this line is responsible for saving the received message (OrderModel) into Azure Cosmos DB.

There is no need to change anything else in the code base (other than enabling this commented line); that's the beauty of Dapr Building Blocks and how easily they let us plug components into our microservice application without any plumbing or bringing in external SDKs.

You do need to work on the configuration part of the Dapr State Store by creating a new component file, just as we did with the Pub/Sub API. The things you need to work on are:

    • Provision Azure Cosmos DB Account and obtain its masterKey.
    • Create a Dapr Component file adhering to Dapr Specs.
    • Create an Azure Container Apps component file adhering to ACA component specs.
    • Test locally on your dev machine using Dapr Component file.
• Register the new Dapr State Store component with the Azure Container Apps Environment and set the Cosmos DB masterKey from the Azure Portal. If you want to challenge yourself more, use the Managed Identity approach as done in this post! That's the right way to protect your keys, and you won't have to worry about managing Cosmos DB keys anymore!
    • Build a new image of the application and push it to Azure Container Registry.
    • Update Azure Container Apps and create a new revision which contains the updated code.
• Verify the results by checking Azure Cosmos DB; you should see the Order Model stored there.

    If you need help, you can always refer to my blog post Azure Container Apps State Store With Dapr State Management API which contains exactly what you need to implement here, so I'm very confident you will be able to complete this exercise with no issues, happy coding :)
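If you want a head start on the Dapr-spec component file for local development, here's a minimal sketch; the component name, URL, key, database, and collection values are placeholders you'd replace with your own (and for real deployments, prefer a secret store or managed identity over an inline masterKey):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore                 # placeholder component name referenced from your code
spec:
  type: state.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    value: https://<your-cosmos-account>.documents.azure.com:443/
  - name: masterKey
    value: <your-cosmos-master-key>   # placeholder; store securely in practice
  - name: database
    value: <your-database-name>
  - name: collection
    value: <your-container-name>
```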

    What's Next?

If you enjoyed working with Dapr and Azure Container Apps and want a deeper dive into more complex scenarios (Dapr bindings, service discovery, auto scaling with KEDA, synchronous service communication, distributed tracing, health probes, etc.) where multiple services are deployed to a single Container Apps Environment, I have created a detailed tutorial that walks you through building the application step by step.

The posts published so far are listed below, and I'm publishing more on a weekly basis, so stay tuned :)

    Resources


    · 5 min read
    Savannah Ostrowski

Welcome to Beyond #30DaysOfServerless in October!

    Yes, it's October!! And since we ended #ServerlessSeptember with a focus on End-to-End Development for Serverless on Azure, we thought it would be good to share updates in October that can help you skill up even further.

    Today, we're following up on the Code to Cloud with azd blog post (Day #29) where we introduced the Azure Developer CLI (azd), an open-source tool for streamlining your end-to-end developer experience going from local development environment to Azure cloud. In today's post, we celebrate the October 2022 release of the tool, with three cool new features.

    And if it's October, it must be #Hacktoberfest!! Read on to learn about how you can take advantage of one of the new features, to contribute to the azd open-source community and ecosystem!

    Ready? Let's go!


    What We'll Cover

    • Azure Friday: Introducing the Azure Developer CLI (Video)
    • October 2022 Release: What's New in the Azure Developer CLI?
      • Azure Pipelines for CI/CD: Learn more
      • Improved Infrastructure as Code structure via Bicep modules: Learn more
      • A new azd template gallery: The new azd-templates gallery for community use! Learn more
    • Awesome-Azd: The new azd-templates gallery for Community use
      • Features: discover, create, contribute, request - templates
      • Hacktoberfest: opportunities to contribute in October - and beyond!


    Azure Friday

    This post is a follow-up to our #ServerlessSeptember post on Code to Cloud with Azure Developer CLI where we introduced azd, a new open-source tool that makes it quick and simple for you to move your application from a local development environment to Azure, streamlining your end-to-end developer workflow in the process.

    Prefer to watch a video overview? I have you covered! Check out my recent conversation with Scott Hanselman on Azure Friday where we:

    • talked about the code-to-cloud developer journey
• walked through the ins and outs of an azd template
    • explored Azure Developer CLI commands in the terminal and VS Code, and
    • (probably most importantly) got a web app up and running on Azure with a database, Key Vault and monitoring all in a couple of minutes

    October Release

    We're pleased to announce the October 2022 release of the Azure Developer CLI (currently 0.3.0-beta.2). Read the release announcement for more details. Here are the highlights:

    • Azure Pipelines for CI/CD: This addresses azure-dev#101, adding support for Azure Pipelines (alongside GitHub Actions) as a CI/CD provider. Learn more about usage and related documentation.
    • Improved Infrastructure as Code structure via Bicep modules: This addresses azure-dev#543, which recognized the complexity of using a single resources.bicep file for all resources. With this release, azd templates now come with Bicep modules organized by purpose making it easier to edit and understand. Learn more about this structure, and how to use it.
    • New Templates Gallery - awesome-azd: This addresses azure-dev#398, which aimed to make templates more discoverable and easier to contribute. Learn more about how the new gallery improves the template discovery experience.

    In the next section, we'll dive briefly into the last feature, introducing the new awesome-azd site and resource for templates discovery and contribution. And, since it's #Hacktoberfest season, we'll talk about the Contributor Guide and the many ways you can contribute to this project - with, or without, code.


    It's awesome-azd

    Welcome to awesome-azd a new template gallery hosted on GitHub Pages, and meant to be a destination site for discovering, requesting, and contributing azd-templates for community use!

In addition, its README reflects the awesome-list resource format, providing a location for the community to share "best of" resources for the Azure Developer CLI - from blog posts and videos to full-scale tutorials and templates.

    The Gallery is organized into three main areas:

    Take a minute to explore the Gallery and note the features:

    • Search for templates by name
    • Requested Templates - indicating asks from the community
    • Featured Templates - highlighting high-quality templates
    • Filters - to discover templates by and/or query combinations

    Check back often to see the latest contributed templates and requests!


    Hacktoberfest

    So, why is this a good time to talk about the Gallery? Because October means it's time for #Hacktoberfest - a month-long celebration of open-source projects and their maintainers, and an opportunity for first-time contributors to get support and guidance making their first pull-requests! Check out the #Hacktoberfest topic on GitHub for projects you can contribute to.

    And we hope you think of awesome-azd as another possible project to contribute to.

    Check out the FAQ section to learn how to create, discover, and contribute templates. Or take a couple of minutes to watch this video walkthrough from Jon Gallant:

    And don't hesitate to reach out to us - either via Issues on the repo, or in the Discussions section of this site, to give us feedback!

    Happy Hacking! 🎃



    ...and that's it! We've successfully deployed our application on Azure!

    But there's more!

    Best practices: Monitoring and CI/CD!

    In my opinion, it's not enough to just set up the application on Azure! I want to know that my web app is performant and serving my users reliably! I also want to make sure that I'm not inadvertently breaking my application as I continue to make changes to it. Thankfully, the Azure Developer CLI also handles all of this via two additional commands - azd monitor and azd pipeline config.

    Application Monitoring

    When we provisioned all of our infrastructure, we also set up application monitoring via a Bicep file in our .infra/ directory that spec'd out an Application Insights dashboard. By running azd monitor we can see the dashboard with live metrics that was configured for the application.

    We can also navigate to the Application Dashboard by clicking on the resource group name, where you can set a specific refresh rate for the dashboard, and see usage, reliability, and performance metrics over time.

    I don't know about everyone else but I have spent a ton of time building out similar dashboards. It can be super time-consuming to write all the queries and create the visualizations so this feels like a real time saver.

    CI/CD

    Finally let's talk about setting up CI/CD! This might be my favorite azd feature. As I mentioned before, the Azure Developer CLI has a command, azd pipeline config, which uses the files in the .github/ directory to set up a GitHub Action. More than that, if there is no upstream repo, the Developer CLI will actually help you create one. But what does this mean exactly? Because our GitHub Action is using the same commands you'd run in the CLI under the hood, we're actually going to have CI/CD set up to run on every commit into the repo, against real Azure resources. What a sweet collaboration feature!

    That's it! We've gone end-to-end with the Azure Developer CLI - initialized a project, provisioned the resources on Azure, deployed our code on Azure, set up monitoring logs and dashboards, and set up a CI/CD pipeline with GitHub Actions to run on every commit into the repo (on real Azure resources!).
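For reference, here's the whole flow condensed into the handful of azd commands we used along the way (the template name is a placeholder):

```bash
azd init --template <azd-template-name>   # scaffold the project from a template
azd provision                             # create the Azure resources described by the template
azd deploy                                # deploy the application code to those resources
azd monitor                               # open the Application Insights dashboard
azd pipeline config                       # wire up GitHub Actions CI/CD against real Azure resources
```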

    Exercise: Try it yourself or create your own template!

    As an exercise, try out the workflow above with any template on GitHub!

    Or, try turning your own project into an Azure Developer CLI-enabled template by following this guidance. If you create your own template, don't forget to tag the repo with the azd-templates topic on GitHub to help others find it (unfamiliar with GitHub topics? Learn how to add topics to your repo)! We'd also love to chat with you about your experience creating an azd template - if you're open to providing feedback around this, please fill out this form!

    Resources


    · 7 min read
    Devanshi Joshi

    It's Serverless September in a Nutshell! Join us as we unpack our month-long learning journey exploring the core technology pillars for Serverless architectures on Azure. Then end with a look at next steps to build your Cloud-native applications on Azure.


    What We'll Cover

    • Functions-as-a-Service (FaaS)
    • Microservices and Containers
    • Serverless Integrations
    • End-to-End Solutions
    • Developer Tools & #Hacktoberfest

    Banner for Serverless September


    Building Cloud-native Apps

    By definition, cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. You can learn more about cloud-native in Kendall Roden's #ServerlessSeptember post on Going Cloud-native with Azure Container Apps.

Serverless technologies accelerate productivity and minimize costs for deploying applications at cloud scale. So, what can we build with serverless technologies in cloud-native Azure? Anything that is event-driven - examples include:

    • Microservices - scaled by KEDA-compliant triggers
    • Public API Endpoints - scaled by #concurrent HTTP requests
    • Event-Driven Applications - scaled by length of message queue
    • Web Applications - scaled by #concurrent HTTP requests
    • Background Process - scaled by CPU and Memory usage

    Great - but as developers, we really want to know how we can get started building and deploying serverless solutions on Azure. That was the focus of our #ServerlessSeptember journey. Let's take a quick look at the four key themes.

    Functions-as-a-Service (FaaS)

    Functions-as-a-Service (FaaS) is the epitome of developer productivity for full-stack modern apps. As developers, you don't manage infrastructure and focus only on business logic and application code. And, with Serverless Compute you only pay for when your code runs - making this the simplest first step to begin migrating your application to cloud-native.

In Azure, FaaS is provided by Azure Functions. Check out our Functions + Serverless on Azure to go from learning core concepts, to building your first Functions app in your programming language of choice. Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell.

    Want to get extended language support for languages like Go, and Rust? You can Use Custom Handlers to make this happen! But what if you want to have long-running functions, or create complex workflows involving more than one function? Read our post on Durable Entities to learn how you can orchestrate this with Azure Functions.

    Check out this recent AskTheExpert Q&A session with the Azure Functions team to get answers to popular community questions on Azure Functions features and usage.

    Microservices and Containers

    Functions-as-a-Service is an ideal first step towards serverless development. But Functions are just one of the 5 pillars of cloud-native. This week we'll look at two of the other pillars: microservices and containers - with specific focus on two core technologies: Azure Container Apps and Dapr (Distributed Application Runtime).

    In this 6-part series of posts, we walk through each technology independently, before looking at the value of building Azure Container Apps with Dapr.

    • In Hello Container Apps we learned core concepts & deployed our first ACA.
    • In Microservices Communication we learned about ACA environments and virtual networks, and how microservices communicate in ACA with a hands-on tutorial.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and configuring ACA for autoscaling with KEDA-compliant triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr), exploring its Building Block APIs and sidecar architecture for working with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
• Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Build ACA with Dapr

    Check out this recent AskTheExpert Q&A session with the Azure Container Apps team for answers to popular community questions on core features and usage.

    Serverless Integrations

    In the first half of the month we looked at compute resources for building and deploying serverless applications. In the second half, we look at integration tools and resources that automate developer workflows to streamline the end-to-end developer experience.

In Azure, this is enabled by services like Azure Logic Apps and Azure Event Grid. Azure Logic Apps provides a visual designer to create and automate workflows with little or no code involved. Azure Event Grid provides a highly scalable event broker with support for pub/sub communications to drive async event-driven architectures.

    • In Tracking Weather Data Changes With Logic Apps we look at how you can use Logic Apps to integrate the MSN weather service with Azure CosmosDB, allowing automated collection of weather data on changes.

    • In Teach the Cloud to Read & Categorize Mail we take it a step further, using Logic Apps to automate a workflow that includes a Computer Vision service to "read" images and store the results to CosmosDB.

    • In Integrate with Microsoft Graph we explore a multi-cloud scenario (Azure + M365) where change notifications from Microsoft Graph can be integrated using Logic Apps and Event Hubs to power an onboarding workflow.

    • In Cloud Events with Event Grid we learn about the CloudEvents specification (for consistently describing event data) - and learn how Event Grid brokers events in this format. Azure Logic Apps can be an Event handler (subscriber) that uses the event to trigger an automated workflow on receipt.

      Azure Event Grid And Logic Apps

    Want to explore other such integrations? Browse Azure Architectures and filter by selected Azure services for more real-world scenarios.


    End-to-End Solutions

    We've covered serverless compute solutions (for building your serverless applications) and serverless integration services to automate end-to-end workflows in synchronous or asynchronous event-driven architectures. In this final week, we want to leave you with a sense of end-to-end development tools and use cases that can be enabled by Serverless on Azure. Here are some key examples:

• Deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps - with a Blazor front-end and two Web API projects.
• Deploy Java containers to cloud - in this tutorial you learn to build and deploy a Java application running on Spring Boot, by publishing it in a container to Azure Container Registry and then deploying it to Azure Container Apps from ACR via the Azure portal.
• Where am I? My GPS Location with Serverless Power Platform Custom Connector - in this step-by-step tutorial you learn to integrate a serverless application (built on Azure Functions and OpenAPI) with Power Platform custom connectors via Azure API Management (API-M). This pattern can empower a new ecosystem of fusion apps for cases like inventory management.
    And in our Serverless Hacks initiative, we walked through an 8-step hack to build a serverless tollbooth. Check out this 12-part video walkthrough of a reference solution using .NET.

    Developer Tools

But wait - there's more. Those are just a sample of the end-to-end application scenarios you can build with serverless on Azure. But what about the developer experience? In this article, we say hello to the Azure Developer CLI - an open-source tool that streamlines your develop-deploy workflow, with simple commands that map to core stages of your development journey. Go from code to cloud with one CLI.

    And watch this space for more such tutorials and content through October, including a special #Hacktoberfest focused initiative to encourage and support first-time contributors to open-source. Here's a sneak peek at the project we plan to share - the new awesome-azd templates gallery.


    Join us at Microsoft Ignite!

    Want to continue your learning journey, and learn about what's next for Serverless on Azure? Microsoft Ignite happens Oct 12-14 this year and has multiple sessions on relevant technologies and tools. Check out the Session Catalog and register here to attend online.


    3 posts tagged with "azure-event-grid"

    View All Tags

    · 9 min read
    Justin Yoo

    Welcome to Day 21 of #30DaysOfServerless!

    We've so far walked through what Azure Event Grid is and how it generally works. Today, let's discuss how Azure Event Grid deals with CloudEvents.


    What We'll Cover


    OK. Let's get started!

    What is CloudEvents?

Needless to say, events are everywhere. They come not only from event-driven systems but also from many different systems and devices, including IoT devices like the Raspberry Pi.

But the problem is that every event publisher (the system or device that creates events) describes its events differently; there is no standard way of describing events. This has caused many issues between systems, mainly from the interoperability perspective:

1. Consistency: With no standard way of describing events, developers had to write their own event-handling logic for each event source.
2. Accessibility: There were no common libraries, tooling, or infrastructure to deliver events across systems.
3. Productivity: Overall productivity decreased because of the lack of a standard event format.

    Cloud Events Logo

Therefore, the CNCF (Cloud Native Computing Foundation) introduced a concept called CloudEvents. CloudEvents is a specification for describing event data in a common way. Conforming event data to this spec simplifies event declaration and delivery across systems and platforms, resulting in a huge productivity increase.

    How Azure Event Grid brokers CloudEvents

Before CloudEvents, Azure Event Grid described events in its own way, so to use Azure Event Grid you had to follow the event format/schema that Azure Event Grid declares. However, not every system, service, or application follows the Azure Event Grid schema. Therefore, Azure Event Grid now supports the CloudEvents spec as both an input and an output format.

    Azure Event Grid for Azure

Take a look at the simple diagram below, which describes how Azure Event Grid captures events raised from various Azure services. In this diagram, Azure Key Vault takes the role of the event source or event publisher, and Azure Logic Apps takes the role of the event handler (I'll discuss Azure Logic Apps as the event handler later in this post). For events raised by Azure services, we use an Azure Event Grid System Topic.

    Azure Event Grid for Azure

    Therefore, let's create an Azure Event Grid System Topic that captures events raised from Azure Key Vault when a new version of a secret is added.

    Azure Event Grid System Topic for Key Vault

    As Azure Event Grid makes use of the pub/sub pattern, you need to create the Azure Event Grid Subscription to consume the events. Here's the subscription that uses the Event Grid data format:

Azure Event Grid System Subscription for Key Vault in Event Grid Format

    Once you create the subscription, create a new version of the secret on Azure Key Vault. Then, Azure Key Vault raises an event, which is captured in the Event Grid format:
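If you'd rather trigger the event from code than from the portal, a minimal sketch using the Azure SDK for .NET might look like this (the vault URL is a placeholder, and the Azure.Identity and Azure.Security.KeyVault.Secrets packages are assumed):

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Placeholder vault URL - point this at the Key Vault the system topic is watching.
var client = new SecretClient(
    new Uri("https://kv-xxxxxxxx.vault.azure.net/"),
    new DefaultAzureCredential());

// Setting a value for an existing secret name creates a new version,
// which raises the SecretNewVersionCreated event captured below.
await client.SetSecretAsync("hello", "world");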

[
  {
    "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
    "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
    "subject": "hello",
    "eventType": "Microsoft.KeyVault.SecretNewVersionCreated",
    "data": {
      "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
      "VaultName": "kv-xxxxxxxx",
      "ObjectType": "Secret",
      "ObjectName": "hello",
      "Version": "064dfc082fec463f8d4610ed6118811d",
      "NBF": null,
      "EXP": null
    },
    "dataVersion": "1",
    "metadataVersion": "1",
    "eventTime": "2022-09-21T07:08:09.1234567Z"
  }
]

    So, how is it different from the CloudEvents format? Let's take a look. According to the spec, the JSON data in CloudEvents might look like this:

{
  "id" : "C234-1234-1234",
  "source" : "/mycontext",
  "specversion" : "1.0",
  "type" : "com.example.someevent",
  "comexampleextension1" : "value",
  "time" : "2018-04-05T17:31:00Z",
  "datacontenttype" : "application/cloudevents+json",
  "data" : {
    "appinfoA" : "abc",
    "appinfoB" : 123,
    "appinfoC" : true
  }
}

    This time, let's create another subscription using the CloudEvents schema. Here's how to create the subscription against the system topic:

    Azure Event Grid System Subscription for Key Vault in CloudEvents Format

Now, Azure Key Vault emits the event data in the CloudEvents format:

{
  "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
  "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
  "specversion": "1.0",
  "type": "Microsoft.KeyVault.SecretNewVersionCreated",
  "subject": "hello",
  "time": "2022-09-21T07:08:09.1234567Z",
  "data": {
    "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
    "VaultName": "kv-xxxxxxxx",
    "ObjectType": "Secret",
    "ObjectName": "hello",
    "Version": "064dfc082fec463f8d4610ed6118811d",
    "NBF": null,
    "EXP": null
  }
}

Can you identify the differences between the Event Grid format and the CloudEvents format? Fortunately, the Event Grid schema and the CloudEvents schema look quite similar to each other. But events might look significantly different if you use an event source outside Azure.

    Azure Event Grid for Systems outside Azure

As mentioned above, event data produced outside Azure, or by your own applications within Azure, might not be understood by Azure Event Grid. In this case, we need to use an Azure Event Grid Custom Topic. Here's the diagram for it:

    Azure Event Grid for Applications outside Azure

Let's create the Azure Event Grid Custom Topic. When you create the topic, make sure you select the CloudEvents schema during the provisioning process:

    Azure Event Grid Custom Topic

If your application needs to publish events to an Azure Event Grid Custom Topic, it should build the event data in the CloudEvents format. For a .NET application, add the NuGet package first.

    dotnet add package Azure.Messaging.EventGrid

    Then, create the publisher instance. You've already got the topic endpoint URL and the access key.

    var topicEndpoint = new Uri("<Azure Event Grid Custom Topic Endpoint URL>");
    var credential = new AzureKeyCredential("<Azure Event Grid Custom Topic Access Key>");
    var publisher = new EventGridPublisherClient(topicEndpoint, credential);

    Now, build the event data like below. Make sure that you follow the CloudEvents schema that requires additional metadata like event source, event type and content type.

    var source = "/your/event/source";
    var type = "com.source.event.your/OnEventOccurs";

    var data = new MyEventData() { Hello = "World" };

    var @event = new CloudEvent(source, type, data);
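The MyEventData type isn't shown in the post; any JSON-serialisable POCO works. A minimal sketch, with a property name matching the captured payload further down, could be:

public class MyEventData
{
    public string Hello { get; set; }
}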

    And finally, send the event to Azure Event Grid Custom Topic.

    await publisher.SendEventAsync(@event);

    The captured event data looks like the following:

{
  "id": "cc2b2775-52b8-43b8-a7cc-c1c33c2b2e59",
  "source": "/your/event/source",
  "type": "com.source.event.my/OnEventOccurs",
  "data": {
    "Hello": "World"
  },
  "time": "2022-09-21T07:08:09.1234567+00:00",
  "specversion": "1.0"
}

However, someone might point out that their existing application doesn't, or can't, emit event data in the CloudEvents format. In this case, what should we do? Since there's no way to send non-CloudEvents data to an Azure Event Grid Custom Topic configured for the CloudEvents schema, one approach is to put a converter between the existing application and the Custom Topic, like below:

    Azure Event Grid for Applications outside Azure with Converter

Once the Function app (or any converter app) receives the legacy event data, it internally converts it to the CloudEvents format and publishes it to Azure Event Grid.

var data = default(MyRequestData);
using (var reader = new StreamReader(req.Body))
{
    var serialised = await reader.ReadToEndAsync();
    data = JsonConvert.DeserializeObject<MyRequestData>(serialised);
}

var converted = new MyEventData() { Hello = data.Lorem };
var @event = new CloudEvent(source, type, converted);
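For context, here's a hedged sketch of what the complete converter Function might look like, wrapping the snippet above in an HTTP-triggered function and publishing the converted event. The route, the shape of MyRequestData, and the app-setting names are assumptions for illustration; MyEventData is the sketch from earlier.

using System;
using System.IO;
using System.Threading.Tasks;
using Azure;
using Azure.Messaging;
using Azure.Messaging.EventGrid;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Newtonsoft.Json;

// Assumed shape of the legacy payload sent by the existing application.
public class MyRequestData
{
    public string Lorem { get; set; }
}

public static class LegacyEventConverter
{
    // Topic endpoint and key are read from app settings (setting names assumed).
    private static readonly EventGridPublisherClient publisher = new EventGridPublisherClient(
        new Uri(Environment.GetEnvironmentVariable("EventGridTopicEndpoint")),
        new AzureKeyCredential(Environment.GetEnvironmentVariable("EventGridTopicKey")));

    [FunctionName("ConvertLegacyEvent")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "convert")] HttpRequest req)
    {
        // Read the legacy payload from the request body.
        MyRequestData data;
        using (var reader = new StreamReader(req.Body))
        {
            var serialised = await reader.ReadToEndAsync();
            data = JsonConvert.DeserializeObject<MyRequestData>(serialised);
        }

        // Map the legacy payload onto the CloudEvents-friendly type and publish it.
        var converted = new MyEventData() { Hello = data.Lorem };
        var @event = new CloudEvent("/your/event/source", "com.source.event.my/OnEventOccurs", converted);
        await publisher.SendEventAsync(@event);

        return new AcceptedResult();
    }
}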

    The converted event data is captured like this:

{
  "id": "df296da3-77cd-4da2-8122-91f631941610",
  "source": "/your/event/source",
  "type": "com.source.event.my/OnEventOccurs",
  "data": {
    "Hello": "ipsum"
  },
  "time": "2022-09-21T07:08:09.1234567+00:00",
  "specversion": "1.0"
}

This approach is beneficial in many integration scenarios because it canonicalises all the event data.

    How Azure Logic Apps consumes CloudEvents

I put Azure Logic Apps as the event handler in the previous diagrams. According to the CloudEvents spec, each event handler must implement request validation to avoid abuse. One good thing about using Azure Logic Apps is that it has already implemented this request validation feature. This means we can just subscribe to the topic and consume the event data.

Create a new Logic Apps instance and add the HTTP Request trigger. Once you save it, you will get the endpoint URL.

    Azure Logic Apps with HTTP Request Trigger

    Then, create the Azure Event Grid Subscription with:

    • Endpoint type: Webhook
    • Endpoint URL: The Logic Apps URL from above.

    Azure Logic Apps with HTTP Request Trigger

Once the subscription is ready, this Logic App works as the event handler. Here's how it receives the CloudEvents data from the subscription.

    Azure Logic Apps that Received CloudEvents data

    Now you've got the CloudEvents data. It's entirely up to you to handle that event data however you want!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how Azure Event Grid with CloudEvents works. Alternatively, the "Deploy to Azure" button below will provision all necessary Azure resources and deploy an Azure Functions app to mimic the event publisher.

    Deploy To Azure

    Resources: For self-study!

    Want to know more about CloudEvents in real-life examples? Here are several resources you can take a look at:


    3 posts tagged with "azure-event-grid"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
• Integrate Functions with CosmosDB and SignalR to bring real-time, push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time. A rough sketch of this integration follows this list.
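As a hedged sketch of that last integration (the database, container, hub, and connection-setting names are all illustrative, and this assumes the in-process Cosmos DB v3 and SignalR Service bindings):

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;
using Microsoft.Extensions.Logging;

public static class BroadcastItemChanges
{
    [FunctionName("BroadcastItemChanges")]
    public static async Task Run(
        [CosmosDBTrigger(
            databaseName: "appdb",
            collectionName: "items",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
        [SignalR(HubName = "updates")] IAsyncCollector<SignalRMessage> signalRMessages,
        ILogger log)
    {
        // Each changed document is pushed to connected clients in real time.
        foreach (var doc in changes)
        {
            await signalRMessages.AddAsync(new SignalRMessage
            {
                Target = "itemChanged",
                Arguments = new object[] { doc }
            });
        }
    }
}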

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn which walks you through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days) but this time, with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - it explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!


    16 posts tagged with "azure-functions"

    View All Tags

    · 10 min read
    Mike James
    Matt Soucoup

    Welcome to Day 6 of #30DaysOfServerless!

    The theme for this week is Azure Functions. Today we're going to talk about why Azure Functions are a great fit for .NET developers.


    What We'll Cover

    • What is serverless computing?
    • How does Azure Functions fit in?
    • Let's build a simple Azure Function in .NET
    • Developer Guide, Samples & Scenarios
    • Exercise: Explore the Create Serverless Applications path.
    • Resources: For self-study!

A banner image that has the title of this article with the author's photo and a drawing that summarizes the demo application.


    The leaves are changing colors and there's a chill in the air, or for those lucky folks in the Southern Hemisphere, the leaves are budding and a warmth is in the air. Either way, that can only mean one thing - it's Serverless September!🍂 So today, we're going to take a look at Azure Functions - what they are, and why they're a great fit for .NET developers.

    What is serverless computing?

For developers, serverless computing means you write highly compact individual functions that do one thing - and run in the cloud. These functions are triggered by some external event. That event could be a record being inserted into a database, a file uploaded into Blob Storage, a timer interval elapsing, or even a simple HTTP request.

    But... servers are still definitely involved! What has changed from other types of cloud computing is that the idea and ownership of the server has been abstracted away.

A lot of the time you'll hear folks refer to this as Functions as a Service or FaaS. The defining characteristic is that all you need to do is put together your application logic. Your code is going to be invoked in response to events - and the cloud provider takes care of everything else. You literally get to focus on only the business logic you need to run in response to something of interest - no worries about hosting.

    You do not need to worry about wiring up the plumbing between the service that originates the event and the serverless runtime environment. The cloud provider will handle the mechanism to call your function in response to whatever event you chose to have the function react to. And it passes along any data that is relevant to the event to your code.

    And here's a really neat thing. You only pay for the time the serverless function is running. So, if you have a function that is triggered by an HTTP request, and you rarely get requests to your function, you would rarely pay.

    How does Azure Functions fit in?

Microsoft's Azure Functions is a modern serverless offering: event-driven cloud computing that is easy for developers to use. It provides a way to run small pieces of code, or Functions, in the cloud without developers having to worry about the infrastructure or platform the Function is running on.

    That means we're only concerned about writing the logic of the Function. And we can write that logic in our choice of languages... like C#. We are also able to add packages from NuGet to Azure Functions—this way, we don't have to reinvent the wheel and can use well-tested libraries.

And the Azure Functions runtime takes care of a ton of neat stuff for us, like passing in information about the event that caused it to kick off - in a strongly typed variable. It also "binds" to other services, like Azure Storage, so we can easily access those services from our code without having to worry about newing them up.

    Let's build an Azure Function!

    Scaffold the Function

    Don't worry about having an Azure subscription or even being connected to the internet—we can develop and debug Azure Functions locally using either Visual Studio or Visual Studio Code!

    For this example, I'm going to use Visual Studio Code to build up a Function that responds to an HTTP trigger and then writes a message to an Azure Storage Queue.

    Diagram of the how the Azure Function will use the HTTP trigger and the Azure Storage Queue Binding

    The incoming HTTP call is the trigger and the message queue the Function writes to is an output binding. Let's have at it!

    info

    You do need to have some tools downloaded and installed to get started. First and foremost, you'll need Visual Studio Code. Then you'll need the Azure Functions extension for VS Code to do the development with. Finally, you'll need the Azurite Emulator installed as well—this will allow us to write to a message queue locally.

    Oh! And of course, .NET 6!

    Now with all of the tooling out of the way, let's write a Function!

    1. Fire up Visual Studio Code. Then, from the command palette, type: Azure Functions: Create New Project

      Screenshot of create a new function dialog in VS Code

    2. Follow the steps as to which directory you want to create the project in and which .NET runtime and language you want to use.

      Screenshot of VS Code prompting which directory and language to use

    3. Pick .NET 6 and C#.

      It will then prompt you to pick the folder in which your Function app resides and then select a template.

      Screenshot of VS Code prompting you to pick the Function trigger template

      Pick the HTTP trigger template. When prompted for a name, call it: PostToAQueue.

    Execute the Function Locally

1. After you give it a namespace, it prompts for an authorization level - pick Anonymous. Now we have a Function! Let's go ahead and hit F5 and see it run!
    info

    After the templates have finished installing, you may get a prompt to download additional components—these are NuGet packages. Go ahead and do that.

    When it runs, you'll see the Azure Functions logo appear in the Terminal window with the URL the Function is located at. Copy that link.

    Screenshot of the Azure Functions local runtime starting up

2. Type the link into a browser, adding a name parameter as shown in this example: http://localhost:7071/api/PostToAQueue?name=Matt. The Function will respond with a message. You can even set breakpoints in Visual Studio Code and step through the code!

    Write To Azure Storage Queue

    Next, we'll get this HTTP trigger Function to write to a local Azure Storage Queue. First we need to add the Storage NuGet package to our project. In the terminal, type:

    dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage

    Then set a configuration setting to tell the Function runtime where to find the Storage. Open up local.settings.json and set "AzureWebJobsStorage" to "UseDevelopmentStorage=true". The full file will look like:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": ""
  }
}

Then create a new class within your project. This class will hold nothing but properties. Call it whatever you want and add whatever properties you want to it. I called mine TheMessage and added Id and Name properties to it.

public class TheMessage
{
    public string Id { get; set; }
    public string Name { get; set; }
}

    Finally, change your PostToAQueue Function, so it looks like the following:


public static class PostToAQueue
{
    [FunctionName("PostToAQueue")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        [Queue("demoqueue", Connection = "AzureWebJobsStorage")] IAsyncCollector<TheMessage> messages,
        ILogger log)
    {
        string name = req.Query["name"];

        await messages.AddAsync(new TheMessage { Id = System.Guid.NewGuid().ToString(), Name = name });

        return new OkResult();
    }
}

    Note the addition of the messages variable. This is telling the Function to use the storage connection we specified before via the Connection property. And it is also specifying which queue to use in that storage account, in this case demoqueue.

All the code is doing is pulling the name out of the query string, newing up a TheMessage object, and adding it to the IAsyncCollector variable.

    That will add the new message to the queue!

    Make sure Azurite is started within VS Code (both the queue and blob emulators). Run the app and send the same GET request as before: http://localhost:7071/api/PostToAQueue?name=Matt.

    If you have the Azure Storage Explorer installed, you can browse your local Queue and see the new message in there!

    Screenshot of Azure Storage Explorer with the new message in the queue

    Summing Up

    We had a quick look at what Microsoft's serverless offering, Azure Functions, is comprised of. It's a full-featured FaaS offering that enables you to write functions in your language of choice, including reusing packages such as those from NuGet.

    A highlight of Azure Functions is the way they are triggered and bound. The triggers define how a Function starts, and bindings are akin to input and output parameters on it that correspond to external services. The best part is that the Azure Function runtime takes care of maintaining the connection to the external services so you don't have to worry about new'ing up or disposing of the connections yourself.

We then wrote a quick Function that gets triggered off an HTTP request and then writes a query string parameter from that request into a local Azure Storage Queue.

    What's Next

    So, where can you go from here?

Think about how you can build real-world scenarios by integrating other Azure services. For example, you could use serverless integrations to build a workflow where the input payload received via an HTTP Trigger is stored in Blob Storage (output binding), which in turn triggers another service (e.g., Cognitive Services) that processes the blob and returns an enhanced result. A rough sketch of that first hop appears below.
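Here's a hedged sketch of that first hop, using an HTTP Trigger with a Blob Storage output binding (the container name and route are made up for illustration):

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class SaveUploadedPayload
{
    [FunctionName("SaveUploadedPayload")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "uploads")] HttpRequest req,
        [Blob("uploads/{rand-guid}.json", FileAccess.Write, Connection = "AzureWebJobsStorage")] Stream output)
    {
        // Copy the incoming HTTP payload straight into a new blob.
        // A Blob-triggered function (or another service) can process it from there.
        await req.Body.CopyToAsync(output);
        return new AcceptedResult();
    }
}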

    Keep an eye out for an update to this post where we walk through a scenario like this with code. Check out the resources below to help you get started on your own.

    Exercise

    This brings us close to the end of Week 1 with Azure Functions. We've learned core concepts, built and deployed our first Functions app, and explored quickstarts and scenarios for different programming languages. So, what can you do to explore this topic on your own?

    • Explore the Create Serverless Applications learning path which has several modules that explore Azure Functions integrations with various services.
    • Take up the Cloud Skills Challenge and complete those modules in a fun setting where you compete with peers for a spot on the leaderboard!

    Then come back tomorrow as we wrap up the week with a discussion on end-to-end scenarios, a recap of what we covered this week, and a look at what's ahead next week.

    Resources

    Start here for developer guidance in getting started with Azure Functions as a .NET/C# developer:

    Then learn about supported Triggers and Bindings for C#, with code snippets to show how they are used.

    Finally, explore Azure Functions samples for C# and learn to implement serverless solutions. Examples include:


    16 posts tagged with "azure-functions"

    View All Tags

    · 8 min read
    David Justo

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Durable Entities
    • Some Background
    • A Programming Model
    • Entities for a Micro-Blogging Platform


    Durable Entities are a special type of Azure Function that allow you to implement stateful objects in a serverless environment. They make it easy to introduce stateful components to your app without needing to manually persist data to external storage, so you can focus on your business logic. We’ll demonstrate their power with a real-life example in the last section.

    Entities 101: Some Background

Programming Durable Entities feels a lot like object-oriented programming, except that these “objects” exist in a distributed system. Like objects, each Entity instance has a unique identifier, i.e. an entity ID that can be used to read and manipulate its internal state. Entities define a list of operations that constrain how their internal state is managed, like an object interface.

    Some experienced readers may realize that Entities sound a lot like an implementation of the Actor Pattern. For a discussion of the relationship between Entities and Actors, please refer to this documentation.

    Entities are a part of the Durable Functions Extension, an extension of Azure Functions that empowers programmers with stateful abstractions for serverless, such as Orchestrations (i.e. workflows).

    Durable Functions is available in most Azure Functions runtime environments: .NET, Node.js, Python, PowerShell, and Java (preview). For this article, we’ll focus on the C# experience, but note that Entities are also available in Node.js and Python; their availability in other languages is underway.

    Entities 102: The programming model

    Imagine you want to implement a simple Entity that just counts things. Its interface allows you to get the current count, add to the current count, and to reset the count to zero.

    If you implement this in an object-oriented way, you’d probably define a class (say “Counter”) with a method to get the current count (say “Counter.Get”), another to add to the count (say “Counter.Add”), and another to reset the count (say “Counter.Reset”). Well, the implementation of an Entity in C# is not that different from this sketch:

[JsonObject(MemberSerialization.OptIn)]
public class Counter
{
    [JsonProperty("value")]
    public int Value { get; set; }

    public void Add(int amount)
    {
        this.Value += amount;
    }

    public Task Reset()
    {
        this.Value = 0;
        return Task.CompletedTask;
    }

    public Task<int> Get()
    {
        return Task.FromResult(this.Value);
    }

    [FunctionName(nameof(Counter))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<Counter>();
}

    We’ve defined a class named Counter, with an internal count stored in the variable “Value” which is manipulated through the “Add” and “Reset” methods, and which can be read via “Get”.

The “Run” method is simply boilerplate required for the Azure Functions framework to interact with the object we’ve defined – it’s the method that the framework calls internally when it needs to load the Entity object. When DispatchAsync is called, the Entity and its corresponding state (the last count in “Value”) are loaded from storage. Again, this is mostly just boilerplate: your Entity’s business logic lies in the rest of the class.

    Finally, the Json annotation on top of the class and the Value field tells the Durable Functions framework that the “Value” field is to be durably persisted as part of the durable state on each Entity invocation. If you were to annotate other class variables with JsonProperty, they would also become part of the managed state.
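To make the Counter concrete, here is a hedged sketch of an HTTP-triggered function that signals it (the function name, route, and entity key are illustrative, and the usual ASP.NET Core and Durable Functions usings are assumed):

[FunctionName("AddToCounter")]
public static async Task<IActionResult> AddToCounter(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "counter/{key}/add/{amount:int}")] HttpRequest req,
    [DurableClient] IDurableClient client,
    string key,
    int amount)
{
    // One-way "signal": enqueue the Add operation for this entity and return immediately.
    var entityId = new EntityId(nameof(Counter), key);
    await client.SignalEntityAsync(entityId, "Add", amount);
    return new AcceptedResult();
}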

    Entities for a micro-blogging platform

We’ll try to implement a simple micro-blogging platform, a la Twitter. Let’s call it “Chirper”. In Chirper, users write chirps (i.e. tweets), follow and unfollow other users, and read the chirps of users they follow.

    Defining Entity

Just like in OOP, it’s useful to begin by identifying the stateful agents of this scenario. In this case, users have state (who they follow and their chirps), and chirps have state in the form of their content. So, we could model these stateful agents as Entities!

    Below is a potential way to implement a User for Chirper as an Entity:

[JsonObject(MemberSerialization = MemberSerialization.OptIn)]
public class User : IUser
{
    [JsonProperty]
    public List<string> FollowedUsers { get; set; } = new List<string>();

    public void Add(string user)
    {
        FollowedUsers.Add(user);
    }

    public void Remove(string user)
    {
        FollowedUsers.Remove(user);
    }

    public Task<List<string>> Get()
    {
        return Task.FromResult(FollowedUsers);
    }

    // note: removed boilerplate “Run” method, for conciseness.
}

In this case, our Entity’s internal state is stored in “FollowedUsers”, which is a list of accounts followed by this user. The operations exposed by this entity allow clients to read and modify this data: it can be read via “Get”, a newly followed user can be added via “Add”, and a followed user can be removed via “Remove”.

With that, we’ve modeled a Chirper user as an Entity! Recall that Entity instances each have a unique ID, so we can consider that unique ID to correspond to a specific user account.

What about chirps? Should we represent them as Entities as well? That would certainly be valid. However, we would then need to maintain a mapping between each user’s entity ID and the entity ID of every chirp that user wrote.

    For demonstration purposes, a simpler approach would be to create an Entity that stores the list of all chirps authored by a given user; call it UserChirps. Then, we could fix each User Entity to share the same entity ID as its corresponding UserChirps Entity, making client operations easier.

    Below is a simple implementation of UserChirps:

[JsonObject(MemberSerialization = MemberSerialization.OptIn)]
public class UserChirps : IUserChirps
{
    [JsonProperty]
    public List<Chirp> Chirps { get; set; } = new List<Chirp>();

    public void Add(Chirp chirp)
    {
        Chirps.Add(chirp);
    }

    public void Remove(DateTime timestamp)
    {
        Chirps.RemoveAll(chirp => chirp.Timestamp == timestamp);
    }

    public Task<List<Chirp>> Get()
    {
        return Task.FromResult(Chirps);
    }

    // Omitted boilerplate “Run” function
}

Here, our state is stored in Chirps, a list of user posts. Our operations follow the same pattern as before (Add, Remove, and Get), but we’re representing different data.

    To put it all together, let’s set up Entity clients to generate and manipulate these Entities according to some REST API.

    Interacting with Entity

    Before going there, let’s talk briefly about how you can interact with an Entity. Entity interactions take one of two forms -- calls and signals:

• Calling an entity is a two-way communication. You send an operation message to the entity and then wait for the response message before you continue. The response can be a result value or an error.
• Signaling an entity is a one-way (fire-and-forget) communication. You send an operation message but don’t wait for a response. You have the reassurance that the message will be delivered eventually, but you don’t know when and don’t know what the response is.

For example, when you read the state of an Entity, you are performing a “call” interaction. When you record that a user has followed another, you may choose to simply signal it.
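The same distinction applies inside orchestrations. Here is a hedged sketch (orchestration name and entity key are illustrative) contrasting the two styles against the Counter entity from earlier:

[FunctionName("CounterOrchestration")]
public static async Task<int> RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var entityId = new EntityId(nameof(Counter), "myCounter");

    // Signal: one-way, fire-and-forget. The orchestrator does not wait for a result.
    context.SignalEntity(entityId, "Add", 1);

    // Call: two-way. The orchestrator waits for the entity's response.
    return await context.CallEntityAsync<int>(entityId, "Get");
}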

Now say a user with a given userId (say “durableFan99”) wants to post a chirp. For this, you can write an HTTP endpoint to signal the UserChirps entity to record that chirp. We can leverage the HTTP Trigger functionality from Azure Functions and pair it with an entity client binding that signals the Add operation of our UserChirps Entity:

    [FunctionName("UserChirpsPost")] 
    public static async Task<HttpResponseMessage> UserChirpsPost(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "user/{userId}/chirps")]
    HttpRequestMessage req,
    DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {
    Authenticate(req, userId);
    var chirp = new Chirp()
    {
    UserId = userId,
    Timestamp = DateTime.UtcNow,
    Content = await req.Content.ReadAsStringAsync(),
    };
    await client.SignalEntityAsync<IUserChirps>(userId, x => x.Add(chirp));
    return req.CreateResponse(HttpStatusCode.Accepted, chirp);
    }

    Following the same pattern as above, to get all the chirps from a user, you could read the status of your Entity via ReadEntityStateAsync, which follows the call-interaction pattern as your client expects a response:

    [FunctionName("UserChirpsGet")] 
    public static async Task<HttpResponseMessage> UserChirpsGet(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "user/{userId}/chirps")] HttpRequestMessage req,
    [DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {

    Authenticate(req, userId);
    var target = new EntityId(nameof(UserChirps), userId);
    var chirps = await client.ReadEntityStateAsync<UserChirps>(target);
    return chirps.EntityExists
    ? req.CreateResponse(HttpStatusCode.OK, chirps.EntityState.Chirps)
    : req.CreateResponse(HttpStatusCode.NotFound);
    }

    And there you have it! To play with a complete implementation of Chirper, you can try out our sample in the Durable Functions extension repo.

    Thank you!

    info

    Thanks for following along, and we hope you find Entities as useful as we do! If you have questions or feedback, please file issues in the repo above or tag us @AzureFunctions on Twitter


    16 posts tagged with "azure-functions"

    View All Tags

    · 8 min read
    Kendall Roden

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Defining Cloud-Native
    • Introduction to Azure Container Apps
    • Dapr In Azure Container Apps
    • Conclusion


    Defining Cloud-Native

    While I’m positive I’m not the first person to ask this, I think it’s an appropriate way for us to kick off this article: “How many developers does it take to define Cloud-Native?” I hope you aren’t waiting for a punch line because I seriously want to know your thoughts (drop your perspectives in the comments..) but if you ask me, the limit does not exist!

    A quick online search of the topic returns a laundry list of articles, e-books, twitter threads, etc. all trying to nail down the one true definition. While diving into the rabbit hole of Cloud-Native, you will inevitably find yourself on the Cloud-Native Computing Foundation (CNCF) site. The CNCF is part of the Linux Foundation and aims to make "Cloud-Native computing ubiquitous" through deep open source project and community involvement. The CNCF has also published arguably the most popularized definition of Cloud-Native which begins with the following statement:

    “Cloud-Native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds."

    Over the past four years, my day-to-day work has been driven primarily by the surging demand for application containerization and the drastic adoption of Kubernetes as the de-facto container orchestrator. Customers are eager to learn and leverage patterns, practices and technologies that enable building "loosely coupled systems that are resilient, manageable, and observable". Enterprise developers at these organizations are being tasked with rapidly deploying event-driven, horizontally-scalable, polyglot services via repeatable, code-to-cloud pipelines.

    While building Cloud-Native solutions can enable rapid innovation, the transition to adopting a Cloud-Native architectural approach comes with a steep learning curve and a new set of considerations. In a document published by Microsoft called What is Cloud-Native?, there are a few key areas highlighted to aid customers in the adoption of best practices for building modern, portable applications which I will summarize below:

    Cloud infrastructure

    • Cloud-Native applications leverage cloud infrastructure and make use of Platform-as-a-service offerings
    • Cloud-Native applications depend on highly-elastic infrastructure with automatic scaling, self-healing, and monitoring capabilities

    Modern application design

    • Cloud-Native applications should be constructed using principles outlined in the 12 factor methodology

    Microservices

    • Cloud-Native applications are typically composed of microservices where each core function, or service, is built and deployed independently

    Containers

    • Cloud-Native applications are typically deployed using containers as a packaging mechanism where an application's code and dependencies are bundled together for consistency of deployment
    • Cloud-Native applications leverage container orchestration technologies- primarily Kubernetes- for achieving capabilities such as workload scheduling, self-healing, auto-scale, etc.

    Backing services

    • Cloud-Native applications are ideally stateless workloads which retrieve and store data in data stores external to the application hosting infrastructure. Cloud providers like Azure provide an array of backing data services which can be securely accessed from application code and provide capabilities for ensuring application data is highly-available

    Automation

    • Cloud-Native solutions should use deployment automation for backing cloud infrastructure via versioned, parameterized Infrastructure as Code (IaC) templates which provide a consistent, repeatable process for provisioning cloud resources.
    • Cloud-Native solutions should make use of modern CI/CD practices and pipelines to ensure successful, reliable infrastructure and application deployment.

    Azure Container Apps

    In many of the conversations I've had with customers that involve talk of Kubernetes and containers, the topics of cost-optimization, security, networking, and reducing infrastructure and operations inevitably arise. I personally have yet to meet with any customers eager to have their developers get more involved with infrastructure concerns.

    One of my former colleagues, Jeff Hollan, made a statement while appearing on a 2019 episode of The Cloud-Native Show where he shared his perspective on Cloud-Native:

    "When I think about Cloud-Native... it's writing applications in a way where you are specifically thinking about the benefits the cloud can provide... to me, serverless is the perfect realization of that because the only reason you can write serverless applications is because the cloud exists."

I must say that I agree with Jeff's perspective. In addition to optimizing development practices for the Cloud-Native world, reducing infrastructure exposure and operations is equally important to many organizations and can be achieved as a result of cloud platform innovation.

    In May of 2022, Microsoft announced the general availability of Azure Container Apps. Azure Container Apps provides customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform.

For those interested in taking advantage of the open source ecosystem while reaping the benefits of a managed platform experience, Container Apps runs on Kubernetes and provides a set of managed open source projects embedded directly into the platform, including the Kubernetes Event Driven Autoscaler (KEDA), the Distributed Application Runtime (Dapr) and Envoy.

    Azure Kubernetes Service vs. Azure Container Apps

Container Apps provides a number of other Cloud-Native features and capabilities in addition to those above.

The ability to dynamically scale and support growing numbers of users, events, and requests is one of the core requirements for most Cloud-Native, distributed applications. Azure Container Apps is purpose-built with this and other Cloud-Native tenets in mind.

    What can you build with Azure Container Apps?

    Dapr in Azure Container Apps

As a quick personal note before we dive into this section, I will say I am a bit biased about Dapr. When Dapr was first released, I had an opportunity to immediately get involved and became an early advocate for the project. It was created by developers, for developers, and solves tangible problems that customers architecting distributed systems face:

    HOW DO I
    • integrate with external systems that my app has to react and respond to?
    • create event driven apps which reliably send events from one service to another?
    • observe the calls and events between my services to diagnose issues in production?
    • access secrets securely from within my application?
    • discover other services and call methods on them?
• avoid committing to a technology too early and keep the flexibility to swap in an alternative based on project or environment changes?

While existing solutions in the market could address some of the concerns above, there was not a lightweight, CNCF-backed project that provided a unified approach to the more fundamental ask from customers: "How do I make it easy for developers to build microservices based on Cloud-Native best practices?"

    Enter Dapr!

"The Distributed Application Runtime (Dapr) provides APIs that simplify microservice connectivity. Whether your communication pattern is service to service invocation or pub/sub messaging, Dapr helps you write resilient and secured microservices. By letting Dapr’s sidecar take care of the complex challenges such as service discovery, message broker integration, encryption, observability, and secret management, you can focus on business logic and keep your code simple."

The Container Apps platform provides a managed and supported Dapr integration which eliminates the need for deploying and managing the Dapr OSS project. In addition to providing managed upgrades, the platform also exposes a simplified Dapr interaction model to increase developer productivity and reduce the friction required to leverage Dapr capabilities. While the Dapr integration makes it easier for customers to adopt Cloud-Native best practices in Container Apps, it is not required to make use of the Container Apps platform.

    Image on Dapr

For additional insight into the Dapr integration, visit aka.ms/aca-dapr.

    Conclusion

    Backed by and integrated with powerful Cloud-Native technologies, Azure Container Apps strives to make developers productive, while reducing the operational overhead and learning curve that typically accompanies adopting a cloud-native strategy.

    If you are interested in building resilient, portable and highly-scalable apps visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/tags/azure-functions/page/13/index.html b/blog/tags/azure-functions/page/13/index.html index 114a5b495e..2b90aa8ad3 100644 --- a/blog/tags/azure-functions/page/13/index.html +++ b/blog/tags/azure-functions/page/13/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "azure-functions"

    View All Tags

    · 7 min read
    Aaron Powell

    Welcome to Day 5 of #30DaysOfServerless!

    Yesterday we looked at Azure Functions from the perspective of a Java developer. Today, we'll do a similar walkthrough from the perspective of a JavaScript developer.

    And, we'll use this to explore another popular usage scenario for Azure Functions: building a serverless HTTP API using JavaScript.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance
    • Create Azure Function with CLI
    • Calling an external API
    • Azure Samples & Scenarios for JS
    • Exercise: Support searching
    • Resources: For self-study!


    Developer Guidance

    If you're a JavaScript developer new to serverless on Azure, start by exploring the Azure Functions JavaScript Developers Guide. It covers:

• Quickstarts for Node.js - using Visual Studio Code, CLI or Azure Portal
    • Guidance on hosting options and performance considerations
• Azure Functions bindings (and code samples) for JavaScript
    • Scenario examples - integrations with other Azure Services

    Node.js 18 Support

    Node.js 18 Support (Public Preview)

Azure Functions support for Node.js 18 entered Public Preview on Aug 31, 2022 and is supported by the Azure Functions v4.x runtime!

    As we continue to explore how we can use Azure Functions, today we're going to look at using JavaScript to create one, and we're going to be using the newly released Node.js 18 support for Azure Functions to make the most out of the platform.

    Ensure you have Node.js 18 and Azure Functions v4.x versions installed, along with a text editor (I'll use VS Code in this post), and a terminal, then we're ready to go.

    Scenario: Calling The GitHub API

    The application we're going to be building today will use the GitHub API to return a random commit message, so that we don't need to come up with one ourselves! After all, naming things can be really hard! 🤣

    Creating the Azure Function

    To create our Azure Function, we're going to use the Azure Functions CLI, which we can install using npm:

npm install --global azure-functions-core-tools

Once that's installed, we can use the new func command to initialise our project:

    func init --worker-runtime node --language javascript

When running func init we can either provide the worker-runtime and language as arguments, or use the menu system that the tool will provide us. For brevity's sake, I've used the arguments here, specifying that we want node as the runtime and javascript as the language, but you could change that to typescript if you'd prefer to use TypeScript.

    Once the init command is completed, you should have a .vscode folder, and the files .gitignore, host.json, local.settings.json, and package.json.

    Files generated by func initFiles generated by func init

Adding an HTTP Trigger

We have an empty Functions app so far; what we need to do next is create a Function for it to run. We're going to make an HTTP Trigger Function, which is a Function that responds to HTTP requests. We'll use the func new command to create it:

    func new --template "HTTP Trigger" --name "get-commit-message"

When this completes, we'll have a folder for the Function, using the name we provided, that contains the files function.json and index.js. Let's open the function.json to understand it a little bit:

{
    "bindings": [
        {
            "authLevel": "function",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}

This file is used to tell Functions about the Function that we've created and what it does, so it knows to handle the appropriate events. We have a bindings node which contains the event bindings for our Azure Function. The first binding is using the type httpTrigger, which indicates that it'll be executed, or triggered, by an HTTP event, and the methods indicates that it's listening to both GET and POST (you can change this for the right HTTP methods that you want to support). The HTTP request information will be bound to a property in the Functions context called req, so we can access query strings, the request body, etc.

The other binding we have has the direction of out, meaning that it's something that the Function will return to the caller, and since this is an HTTP API, the type is http, indicating that we'll return an HTTP response, and that response will be on a property called res that we add to the Functions context.

    Let's go ahead and start the Function and call it:

    func start

    Starting the FunctionStarting the Function

    With the Function started, access the endpoint http://localhost:7071/api/get-commit-message via a browser or using cURL:

    curl http://localhost:7071/api/get-commit-message\?name\=ServerlessSeptember

    Hello from Azure FunctionsHello from Azure Functions

    🎉 CONGRATULATIONS

    You created and ran a JavaScript function app locally!

    Calling an external API

    It's time to update the Function to do what we want to do - call the GitHub Search API and get some commit messages. The endpoint that we'll be calling is https://api.github.com/search/commits?q=language:javascript.

    Note: The GitHub API is rate limited and this sample will call it unauthenticated, so be aware of that in your own testing.

    To call this API, we'll leverage the newly released fetch support in Node 18 and async/await, to make for a very clean Function.

Open up the index.js file, and delete the contents of the existing Function, so we have an empty one:

    module.exports = async function (context, req) {

    }

    The default template uses CommonJS, but you can use ES Modules with Azure Functions if you prefer.
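If you do want to go down the ES Modules route, a minimal sketch of the same empty Function might look like this (assuming you rename the file to index.mjs, or set "type": "module" in package.json):

// index.mjs - the same empty Function, written as an ES Module.
export default async function (context, req) {
    // Function body goes here.
}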

    Now we'll use fetch to call the API, and unpack the JSON response:

module.exports = async function (context, req) {
    const res = await fetch("https://api.github.com/search/commits?q=language:javascript");
    const json = await res.json();
    const messages = json.items.map(item => item.commit.message);
    context.res = {
        body: {
            messages
        }
    };
}

    To send a response to the client, we're setting the context.res property, where res is the name of the output binding in our function.json, and giving it a body that contains the commit messages.
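As noted earlier, the GitHub API is rate limited for unauthenticated calls. If you run into those limits in your own testing, one option (just a sketch, not part of the original sample - GITHUB_TOKEN is a hypothetical app setting you would add to local.settings.json) is to send a personal access token with the request:

module.exports = async function (context, req) {
    // GITHUB_TOKEN is a hypothetical setting; only send the header when it's configured.
    const headers = { "User-Agent": "get-commit-message-sample" };
    if (process.env.GITHUB_TOKEN) {
        headers["Authorization"] = `Bearer ${process.env.GITHUB_TOKEN}`;
    }

    const res = await fetch("https://api.github.com/search/commits?q=language:javascript", { headers });
    const json = await res.json();
    const messages = json.items.map(item => item.commit.message);

    context.res = {
        body: {
            messages
        }
    };
}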

    Run func start again, and call the endpoint:

    curl http://localhost:7071/api/get-commit-message

Then you'll get some commit messages:

    A series of commit messages from the GitHub Search APIA series of commit messages from the GitHub Search API

    🎉 CONGRATULATIONS

There we go! We've created an Azure Function that acts as a proxy to another API, which we call (using native fetch in Node.js 18), and from which we return a subset of the JSON payload.

    Next Steps

    Other Triggers, Bindings

This article focused on using the HTTP Trigger and relevant bindings to build a serverless API using Azure Functions. How can you explore other supported bindings, with code samples to illustrate usage?

    Scenarios with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other services. Here are some suggestions:

    Exercise: Support searching

    The GitHub Search API allows you to provide search parameters via the q query string. In this sample, we hard-coded it to be language:javascript, but as a follow-on exercise, expand the Function to allow the caller to provide the search terms as a query string to the Azure Function, which is passed to the GitHub Search API. Hint - have a look at the req argument.
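Here's one possible way to approach it - a minimal sketch only, assuming the caller passes the search terms in a q query string parameter (that parameter name is just illustrative):

module.exports = async function (context, req) {
    // Fall back to the original hard-coded search when no query string is supplied.
    const query = req.query.q || "language:javascript";
    const url = `https://api.github.com/search/commits?q=${encodeURIComponent(query)}`;

    const res = await fetch(url);
    const json = await res.json();
    const messages = json.items.map(item => item.commit.message);

    context.res = {
        body: {
            messages
        }
    };
}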

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/azure-functions/page/14/index.html b/blog/tags/azure-functions/page/14/index.html index e645c66a07..d342e8b25f 100644 --- a/blog/tags/azure-functions/page/14/index.html +++ b/blog/tags/azure-functions/page/14/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "azure-functions"

    View All Tags

    · 8 min read
    Rory Preddy

    Welcome to Day 4 of #30DaysOfServerless!

    Yesterday we walked through an Azure Functions Quickstart with JavaScript, and used it to understand the general Functions App structure, tooling and developer experience.

    Today we'll look at developing Functions app with a different programming language - namely, Java - and explore developer guidance, tools and resources to build serverless Java solutions on Azure.


    What We'll Cover


    Developer Guidance

    If you're a Java developer new to serverless on Azure, start by exploring the Azure Functions Java Developer Guide. It covers:

    In this blog post, we'll dive into one quickstart, and discuss other resources briefly, for awareness! Do check out the recommended exercises and resources for self-study!


    My First Java Functions App

    In today's post, we'll walk through the Quickstart: Azure Functions tutorial using Visual Studio Code. In the process, we'll setup our development environment with the relevant command-line tools and VS Code extensions to make building Functions app simpler.

Note: Completing this exercise may incur a cost of a few USD cents based on your Azure subscription. Explore pricing details to learn more.

    First, make sure you have your development environment setup and configured.

    PRE-REQUISITES
    1. An Azure account with an active subscription - Create an account for free
    2. The Java Development Kit, version 11 or 8. - Install
    3. Apache Maven, version 3.0 or above. - Install
    4. Visual Studio Code. - Install
    5. The Java extension pack - Install
    6. The Azure Functions extension for Visual Studio Code - Install

    VS Code Setup

    NEW TO VISUAL STUDIO CODE?

    Start with the Java in Visual Studio Code tutorial to jumpstart your learning!

    Install the Extension Pack for Java (shown below) to install 6 popular extensions to help development workflow from creation to testing, debugging, and deployment.

    Extension Pack for Java

    Now, it's time to get started on our first Java-based Functions app.

    1. Create App

    1. Open a command-line terminal and create a folder for your project. Use the code command to launch Visual Studio Code from that directory as shown:

      $ mkdir java-function-resource-group-api
      $ cd java-function-resource-group-api
      $ code .
2. Open the Visual Studio Code Command Palette (Ctrl + Shift + P) and select Azure Functions: create new project to kickstart the create workflow. Alternatively, you can click the Azure icon (on the activity sidebar) to get the Workspace window, click "+" and pick the "Create Function" option as shown below.

      Screenshot of creating function in Azure from Visual Studio Code.

    3. This triggers a multi-step workflow. Fill in the information for each step as shown in the following prompts. Important: Start this process from an empty folder - the workflow will populate it with the scaffold for your Java-based Functions app.

  For each prompt, provide the following value:

  • Choose the directory location: You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
  • Select a language: Choose Java.
  • Select a version of Java: Choose Java 11 or Java 8, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally.
  • Provide a group ID: Choose com.function.
  • Provide an artifact ID: Enter myFunction.
  • Provide a version: Choose 1.0-SNAPSHOT.
  • Provide a package name: Choose com.function.
  • Provide an app name: Enter HttpExample.
  • Select the build tool for Java project: Choose Maven.

    Visual Studio Code uses the provided information and generates an Azure Functions project. You can view the local project files in the Explorer - it should look like this:

    Azure Functions Scaffold For Java

    2. Preview App

Visual Studio Code integrates with Azure Functions Core Tools to let you run this project on your local development computer before you publish to Azure.

    1. To build and run the application, use the following Maven command. You should see output similar to that shown below.

      $ mvn clean package azure-functions:run
      ..
      ..
      Now listening on: http://0.0.0.0:7071
      Application started. Press Ctrl+C to shut down.

      Http Functions:

      HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
      ...
    2. Copy the URL of your HttpExample function from this output to a browser and append the query string ?name=<YOUR_NAME>, making the full URL something like http://localhost:7071/api/HttpExample?name=Functions. The browser should display a message that echoes back your query string value. The terminal in which you started your project also shows log output as you make requests.

    🎉 CONGRATULATIONS

    You created and ran a function app locally!

    With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger. After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code and Maven to publish and test the project on Azure.

    3. Sign into Azure

    Before you can deploy, sign in to your Azure subscription.

    az login

    The az login command signs you into your Azure account.

    Use the following command to deploy your project to a new function app.

    mvn clean package azure-functions:deploy

    When the creation is complete, the following Azure resources are created in your subscription:

    • Resource group. Named as java-functions-group.
    • Storage account. Required by Functions. The name is generated randomly based on Storage account name requirements.
• Hosting plan. Serverless hosting for your function app. The name is java-functions-app-service-plan.
    • Function app. A function app is the deployment and execution unit for your functions. The name is randomly generated based on your artifactId, appended with a randomly generated number.

    4. Deploy App

1. Back in the Resources area in the side bar, expand your subscription, your new function app, and Functions. Right-click (Windows) or Ctrl + click (macOS) the HttpExample function and choose Execute Function Now....

      Screenshot of executing function in Azure from Visual Studio Code.

    2. In Enter request body you see the request message body value of { "name": "Azure" }. Press Enter to send this request message to your function.

    3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.

    You can also copy the complete Invoke URL shown in the output of the publish command into a browser address bar, appending the query parameter ?name=Functions. The browser should display similar output as when you ran the function locally.

    🎉 CONGRATULATIONS

    You deployed your function app to Azure, and invoked it!

    5. Clean up

    Use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name java-functions-group

    Next Steps

    So, where can you go from here? The example above used a familiar HTTP Trigger scenario with a single Azure service (Azure Functions). Now, think about how you can build richer workflows by using other triggers and integrating with other Azure or third-party services.

    Other Triggers, Bindings

    Check out Azure Functions Samples In Java for samples (and short use cases) that highlight other triggers - with code! This includes triggers to integrate with CosmosDB, Blob Storage, Event Grid, Event Hub, Kafka and more.

    Scenario with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other Services. Here are a couple of useful tutorials:

    Exercise

    Time to put this into action and validate your development workflow:

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/azure-functions/page/15/index.html b/blog/tags/azure-functions/page/15/index.html index afbcbcc781..fee4b538aa 100644 --- a/blog/tags/azure-functions/page/15/index.html +++ b/blog/tags/azure-functions/page/15/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "azure-functions"

    View All Tags

    · 9 min read
    Nitya Narasimhan

    Welcome to Day 3 of #30DaysOfServerless!

    Yesterday we learned core concepts and terminology for Azure Functions, the signature Functions-as-a-Service option on Azure. Today we take our first steps into building and deploying an Azure Functions app, and validate local development setup.

    Ready? Let's go.


    What We'll Cover


    Developer Guidance

    Before we jump into development, let's familiarize ourselves with language-specific guidance from the Azure Functions Developer Guide. We'll review the JavaScript version but guides for F#, Java, Python, C# and PowerShell are also available.

1. A function is defined by two things: code (written in a supported programming language) and configuration (specified in a function.json file, declaring the triggers, bindings and other context for execution).

    2. A function app is the unit of deployment for your functions, and is associated with a single execution context or runtime. It can contain multiple functions, but they must be in the same language.

    3. A host configuration is runtime-specific configuration that affects all functions running in a given function app instance. It is defined in a host.json file.

    4. A recommended folder structure is defined for the function app, but may vary based on the programming language used. Check the documentation on folder structures to learn the default for your preferred language.

    Here's an example of the JavaScript folder structure for a function app containing two functions with some shared dependencies. Note that host.json (runtime configuration) is defined once, in the root directory. And function.json is defined separately for each function.

    FunctionsProject
    | - MyFirstFunction
    | | - index.js
    | | - function.json
    | - MySecondFunction
    | | - index.js
    | | - function.json
    | - SharedCode
    | | - myFirstHelperFunction.js
    | | - mySecondHelperFunction.js
    | - node_modules
    | - host.json
    | - package.json
    | - local.settings.json

    We'll dive into what the contents of these files look like, when we build and deploy the first function. We'll cover local.settings.json in the About Local Testing section at the end.


    My First Function App

The documentation provides quickstart options for all supported languages. We'll walk through the JavaScript versions in this article. You have two options for development: working from the command line with Core Tools, or using Visual Studio Code.

    I'm a huge fan of VS Code - so I'll be working through that tutorial today.

    PRE-REQUISITES

    Don't forget to validate your setup by checking the versions of installed software.

    Install VSCode Extension

    Installing the Visual Studio Code extension should automatically open this page in your IDE with similar quickstart instructions, but potentially more recent screenshots.

Azure Functions extension for VS Code

    Note that it may make sense to install the Azure tools for Visual Studio Code extensions pack if you plan on working through the many projects in Serverless September. This includes the Azure Functions extension by default.

    Create First Function App

Walk through the Create local [project] steps of the quickstart. The process is quick and painless and scaffolds out this folder structure and files. Note the existence (and locations) of the function.json and host.json files.

    Final screenshot for VS Code workflow

    Explore the Code

Check out the function.json configuration file. It shows that the function is activated by an httpTrigger with an input binding (tied to the req payload) and an output binding (tied to the res payload). And it supports both GET and POST requests on the exposed URL.

{
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        }
    ]
}

    Check out index.js - the function implementation. We see it logs a message to the console when invoked. It then extracts a name value from the input payload (req) and crafts a different responseMessage based on the presence/absence of a valid name. It returns this response in the output payload (res).

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    const name = (req.query.name || (req.body && req.body.name));
    const responseMessage = name
        ? "Hello, " + name + ". This HTTP triggered function executed successfully."
        : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";

    context.res = {
        // status: 200, /* Defaults to 200 */
        body: responseMessage
    };
}

    Preview Function App Locally

    You can now run this function app locally using Azure Functions Core Tools. VS Code integrates seamlessly with this CLI-based tool, making it possible for you to exploit all its capabilities without leaving the IDE. In fact, the workflow will even prompt you to install those tools if they didn't already exist in your local dev environment.

    Now run the function app locally by clicking on the "Run and Debug" icon in the activity bar (highlighted, left) and pressing the "▶️" (Attach to Node Functions) to start execution. On success, your console output should show something like this.

    Final screenshot for VS Code workflow

    You can test the function locally by visiting the Function Url shown (http://localhost:7071/api/HttpTrigger1) or by opening the Workspace region of the Azure extension, and selecting the Execute Function now menu item as shown.

    Final screenshot for VS Code workflow

    In the latter case, the Enter request body popup will show a pre-populated request of {"name":"Azure"} that you can submit.

    Final screenshot for VS Code workflow

    On successful execution, your VS Code window will show a notification as follows. Take note of the console output - it shows the message encoded in index.js.

    Final screenshot for VS Code workflow

You can also visit the function URL directly in a local browser - testing the case for a request made with no name payload attached. Note how the response in the browser now shows the non-personalized version of the message!

    Final screenshot for VS Code workflow

    🎉 Congratulations

    You created and ran a function app locally!

    (Re)Deploy to Azure

    Now, just follow the creating a function app in Azure steps to deploy it to Azure, using an active subscription! The deployed app resource should now show up under the Function App Resources where you can click Execute Function Now to test the Azure-deployed version instead. You can also look up the function URL in the portal and visit that link in your local browser to trigger the function without the name context.

    🎉 Congratulations

    You have an Azure-hosted serverless function app!

    Challenge yourself and try to change the code and redeploy to Azure to return something different. You have effectively created a serverless API endpoint!


    About Core Tools

    That was a lot to cover! In the next few days we'll have more examples for Azure Functions app development - focused on different programming languages. So let's wrap today's post by reviewing two helpful resources.

First, let's talk about Azure Functions Core Tools - the command-line tool that lets you develop, manage, and deploy Azure Functions projects from your local development environment. It is used transparently by the VS Code extension - but you can use it directly from a terminal for a powerful command-line end-to-end developer experience! The Core Tools commands are organized into contexts that cover the full workflow, from creating and running a project locally to publishing it to Azure.

    Learn how to work with Azure Functions Core Tools. Not only can it help with quick command execution, it can also be invaluable for debugging issues that may not always be visible or understandable in an IDE.

    About Local Testing

    You might have noticed that the scaffold also produced a local.settings.json file. What is that and why is it useful? By definition, the local.settings.json file "stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally."

    Read the guidance on Code and test Azure Functions Locally to learn more about how to configure development environments locally, for your preferred programming language, to support testing and debugging on the local Functions runtime.
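As a quick illustration (a sketch only - MY_API_KEY is a hypothetical setting, not part of the scaffold), values placed under the Values section of local.settings.json surface as environment variables when the app runs locally with Core Tools, so function code reads them via process.env just as it would read application settings in Azure:

module.exports = async function (context, req) {
    // Locally this comes from the "Values" section of local.settings.json;
    // in Azure, the same name would be configured as a Function App application setting.
    const apiKey = process.env.MY_API_KEY;

    context.res = {
        body: apiKey ? "Found the MY_API_KEY setting!" : "MY_API_KEY is not configured."
    };
}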

    Exercise

    We made it! Now it's your turn!! Here are a few things you can try to apply what you learned and reinforce your understanding:

    Resources

    Bookmark and visit the #30DaysOfServerless Collection. It's the one-stop collection of resources we will keep updated with links to relevant documentation and learning resources.

    - + \ No newline at end of file diff --git a/blog/tags/azure-functions/page/16/index.html b/blog/tags/azure-functions/page/16/index.html index 30926614e0..cc6c81ded3 100644 --- a/blog/tags/azure-functions/page/16/index.html +++ b/blog/tags/azure-functions/page/16/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    16 posts tagged with "azure-functions"

    View All Tags

    · 9 min read
    Nitya Narasimhan

    Welcome to Day 2️⃣ of #30DaysOfServerless!

Today, we kickstart our journey into serverless on Azure with a look at Functions-as-a-Service. We'll explore Azure Functions - from core concepts to usage patterns.

    Ready? Let's Go!


    What We'll Cover

    • What is Functions-as-a-Service? (FaaS)
    • What is Azure Functions?
    • Triggers, Bindings and Custom Handlers
    • What is Durable Functions?
    • Orchestrators, Entity Functions and Application Patterns
    • Exercise: Take the Cloud Skills Challenge!
    • Resources: #30DaysOfServerless Collection.


    1. What is FaaS?

FaaS stands for Functions-as-a-Service. But what does that mean for us as application developers? We know that building and deploying modern applications at scale can get complicated, and it starts with us needing to make decisions about compute. In other words, we need to answer this question: "where should I host my application given my resource dependencies and scaling requirements?"


    Azure has this useful flowchart (shown below) to guide your decision-making. You'll see that hosting options generally fall into three categories:

    • Infrastructure as a Service (IaaS) - where you provision and manage Virtual Machines yourself (cloud provider manages infra).
    • Platform as a Service (PaaS) - where you use a provider-managed hosting environment like Azure Container Apps.
    • Functions as a Service (FaaS) - where you forget about hosting environments and simply deploy your code for the provider to run.

    Here, "serverless" compute refers to hosting options where we (as developers) can focus on building apps without having to manage the infrastructure. See serverless compute options on Azure for more information.


    2. Azure Functions

    Azure Functions is the Functions-as-a-Service (FaaS) option on Azure. It is the ideal serverless solution if your application is event-driven with short-lived workloads. With Azure Functions, we develop applications as modular blocks of code (functions) that are executed on demand, in response to configured events (triggers). This approach brings us two advantages:

    • It saves us money. We only pay for the time the function runs.
    • It scales with demand. We have 3 hosting plans for flexible scaling behaviors.

    Azure Functions can be programmed in many popular languages (C#, F#, Java, JavaScript, TypeScript, PowerShell or Python), with Azure providing language-specific handlers and default runtimes to execute them.

    Concept: Custom Handlers
    • What if we wanted to program in a non-supported language?
    • Or we wanted to use a different runtime for a supported language?

    Custom Handlers have you covered! These are lightweight webservers that can receive and process input events from the Functions host - and return responses that can be delivered to any output targets. By this definition, custom handlers can be implemented by any language that supports receiving HTTP events. Check out the quickstart for writing a custom handler in Rust or Go.

    Custom Handlers
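To make that contract concrete, here is a minimal sketch of a custom handler written in Node.js - purely to illustrate the protocol, since in practice you would reach for a language without a built-in worker, like Rust or Go. It assumes the request/response shapes described in the custom handlers documentation and an HTTP-triggered function named HttpExample with an output binding named res:

const http = require("http");

// The Functions host tells the handler which port to listen on.
const port = process.env.FUNCTIONS_CUSTOMHANDLER_PORT || 3000;

const server = http.createServer((req, res) => {
    // The host POSTs invocation payloads to /<FunctionName>.
    if (req.method === "POST" && req.url === "/HttpExample") {
        // Respond with the shape the host expects: Outputs keyed by binding name.
        const payload = {
            Outputs: {
                res: { statusCode: 200, body: "Hello from a custom handler!" }
            },
            Logs: ["HttpExample was invoked via the custom handler"]
        };
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify(payload));
    } else {
        res.writeHead(404);
        res.end();
    }
});

server.listen(port);

The host.json file would then point at this script through its customHandler section (for example, an executable path of node with the script as an argument).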

    Concept: Trigger and Bindings

    We talked about what functions are (code blocks). But when are they invoked or executed? And how do we provide inputs (arguments) and retrieve outputs (results) from this execution?

    This is where triggers and bindings come in.

    • Triggers define how a function is invoked and what associated data it will provide. A function must have exactly one trigger.
• Bindings declaratively define how a resource is connected to the function. The resource or binding can be of type input, output, or both. Bindings are optional. A Function can have multiple input and output bindings.

    Azure Functions comes with a number of supported bindings that can be used to integrate relevant services to power a specific scenario. For instance:

    • HTTP Triggers - invokes the function in response to an HTTP request. Use this to implement serverless APIs for your application.
    • Event Grid Triggers invokes the function on receiving events from an Event Grid. Use this to process events reactively, and potentially publish responses back to custom Event Grid topics.
    • SignalR Service Trigger invokes the function in response to messages from Azure SignalR, allowing your application to take actions with real-time contexts.

Triggers and bindings help you abstract your function's interfaces to other components it interacts with, eliminating hardcoded integrations. They are configured differently based on the programming language you use. For example - JavaScript functions are configured in the function.json file. Here's an example of what that looks like.

{
    "disabled": false,
    "bindings": [
        // ... bindings here
        {
            "type": "bindingType",
            "direction": "in",
            "name": "myParamName",
            // ... more depending on binding
        }
    ]
}

    The key thing to remember is that triggers and bindings have a direction property - triggers are always in, input bindings are in and output bindings are out. Some bindings can support a special inout direction.

    The documentation has code examples for bindings to popular Azure services. Here's an example of the bindings and trigger configuration for a BlobStorage use case.

    // function.json configuration

{
    "bindings": [
        {
            "queueName": "myqueue-items",
            "connection": "MyStorageConnectionAppSetting",
            "name": "myQueueItem",
            "type": "queueTrigger",
            "direction": "in"
        },
        {
            "name": "myInputBlob",
            "type": "blob",
            "path": "samples-workitems/{queueTrigger}",
            "connection": "MyStorageConnectionAppSetting",
            "direction": "in"
        },
        {
            "name": "myOutputBlob",
            "type": "blob",
            "path": "samples-workitems/{queueTrigger}-Copy",
            "connection": "MyStorageConnectionAppSetting",
            "direction": "out"
        }
    ],
    "disabled": false
}

    The code below shows the function implementation. In this scenario, the function is triggered by a queue message carrying an input payload with a blob name. In response, it copies that data to the resource associated with the output binding.

    // function implementation

module.exports = async function(context) {
    context.log('Node.js Queue trigger function processed', context.bindings.myQueueItem);
    context.bindings.myOutputBlob = context.bindings.myInputBlob;
};
    Concept: Custom Bindings

    What if we have a more complex scenario that requires bindings for non-supported resources?

There is an option to create custom bindings if necessary. We don't have time to dive into the details here, but definitely check out the documentation.


    3. Durable Functions

This sounds great, right? But now, let's talk about one challenge for Azure Functions. In the use cases so far, the functions are stateless - they take inputs at runtime if necessary, and return output results if required. But they are otherwise self-contained, which is great for scalability!

But what if I needed to build more complex workflows that need to store and transfer state, and complete operations in a reliable manner? Durable Functions is an extension of Azure Functions that makes stateful workflows possible.

    Concept: Orchestrator Functions

    How can I create workflows that coordinate functions?

    Durable Functions use orchestrator functions to coordinate execution of other Durable functions within a given Functions app. These functions are durable and reliable. Later in this post, we'll talk briefly about some application patterns that showcase popular orchestration scenarios.

    Concept: Entity Functions

    How do I persist and manage state across workflows?

Entity Functions provide explicit state management for Durable Functions, defining operations to read and write state to durable entities. They are associated with a special entity trigger for invocation. These are currently available only for a subset of programming languages, so check to see if they are supported for your programming language of choice.
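As a quick illustration, here is a minimal sketch of a counter entity in JavaScript, along the lines of the counter example in the Durable Entities documentation (the entity and operation names are illustrative, and the function would be paired with an entityTrigger binding in its function.json):

const df = require("durable-functions");

// A "Counter" entity: its state is a number, and operations read or mutate it.
module.exports = df.entity(function (context) {
    const currentValue = context.df.getState(() => 0);

    switch (context.df.operationName) {
        case "add":
            context.df.setState(currentValue + context.df.getInput());
            break;
        case "reset":
            context.df.setState(0);
            break;
        case "get":
            context.df.return(currentValue);
            break;
    }
});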

    USAGE: Application Patterns

Durable Functions are a fascinating topic that would require a separate, longer post to do justice. For now, let's look at some application patterns that showcase their value, starting with the simplest one - Function Chaining, as shown below:

    Function Chaining

    Here, we want to execute a sequence of named functions in a specific order. As shown in the snippet below, the orchestrator function coordinates invocations on the given functions in the desired sequence - "chaining" inputs and outputs to establish the workflow. Take note of the yield keyword. This triggers a checkpoint, preserving the current state of the function for reliable operation.

const df = require("durable-functions");

module.exports = df.orchestrator(function*(context) {
    try {
        const x = yield context.df.callActivity("F1");
        const y = yield context.df.callActivity("F2", x);
        const z = yield context.df.callActivity("F3", y);
        return yield context.df.callActivity("F4", z);
    } catch (error) {
        // Error handling or compensation goes here.
    }
});

    Other application patterns for durable functions include:

    There's a lot more to explore but we won't have time to do that today. Definitely check the documentation and take a minute to read the comparison with Azure Logic Apps to understand what each technology provides for serverless workflow automation.


    4. Exercise

    That was a lot of information to absorb! Thankfully, there are a lot of examples in the documentation that can help put these in context. Here are a couple of exercises you can do, to reinforce your understanding of these concepts.


    5. What's Next?

    The goal for today was to give you a quick tour of key terminology and concepts related to Azure Functions. Tomorrow, we dive into the developer experience, starting with core tools for local development and ending by deploying our first Functions app.

    Want to do some prep work? Here are a few useful links:


    6. Resources


    - + \ No newline at end of file diff --git a/blog/tags/azure-functions/page/2/index.html b/blog/tags/azure-functions/page/2/index.html index 0aeba185ad..81157b547b 100644 --- a/blog/tags/azure-functions/page/2/index.html +++ b/blog/tags/azure-functions/page/2/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "azure-functions"

    View All Tags

    · 14 min read
    Justin Yoo

    Welcome to Day 28 of #30DaysOfServerless!

Since it's serverless end-to-end week, I'm going to discuss how a serverless Azure Functions application with the OpenAPI extension can be seamlessly integrated with a Power Platform custom connector through Azure API Management - in a post I call "Where am I? My GPS Location with Serverless Power Platform Custom Connector".

    OK. Are you ready? Let's get started!


    What We'll Cover

    • What is Power Platform custom connector?
    • Proxy app to Google Maps and Naver Map API
    • API Management integration
    • Two ways of building custom connector
    • Where am I? Power Apps app
    • Exercise: Try this yourself!
    • Resources: For self-study!


    SAMPLE REPO

    Want to follow along? Check out the sample app on GitHub repository used in this post.

    What is Power Platform custom connector?

Power Platform is a low-code/no-code application development tool for fusion teams - teams that bring together people from various disciplines, including field experts (domain experts), IT professionals and professional developers, to successfully deliver business value. Within the fusion team, Power Platform turns the domain experts into citizen developers or low-code developers. What makes Power Platform even more powerful is that it offers hundreds of connectors to other Microsoft 365 and third-party services like SAP, ServiceNow, Salesforce, Google, etc.

However, what if you want to use your internal APIs, or APIs that don't yet offer official connectors? Here's an example: suppose your company has an inventory management system, and you want to use it within Power Apps or Power Automate. That is exactly where Power Platform custom connectors become necessary.

    Inventory Management System for Power Apps

Therefore, Power Platform custom connectors enrich those citizen developers' capabilities, because those connectors can expose any API application for the citizen developers to use.

    In this post, let's build a custom connector that provides a static map image generated by Google Maps API and Naver Map API using your GPS location.

    Proxy app to Google Maps and Naver Map API

First, let's build an Azure Functions app that connects to Google Maps and Naver Map. Suppose that you've already got the API keys for both services. If you haven't yet, get the keys first by visiting here for Google and here for Naver. Then, store them in local.settings.json within your Azure Functions app.

{
    "Values": {
        ...
        "Maps__Google__ApiKey": "<GOOGLE_MAPS_API_KEY>",
        "Maps__Naver__ClientId": "<NAVER_MAP_API_CLIENT_ID>",
        "Maps__Naver__ClientSecret": "<NAVER_MAP_API_CLIENT_SECRET>"
    }
}

    Here's the sample logic to get the static image from Google Maps API. It takes the latitude and longitude of your current location and image zoom level, then returns the static map image. There are a few hard-coded assumptions, though:

    • The image size should be 400x400.
    • The image should be in .png format.
• The marker should be red and show my location.
public class GoogleMapService : IMapService
{
    public async Task<byte[]> GetMapAsync(HttpRequest req)
    {
        var latitude = req.Query["lat"];
        var longitude = req.Query["long"];
        var zoom = (string)req.Query["zoom"] ?? "14";

        var sb = new StringBuilder();
        sb.Append("https://maps.googleapis.com/maps/api/staticmap")
          .Append($"?center={latitude},{longitude}")
          .Append("&size=400x400")
          .Append($"&zoom={zoom}")
          .Append($"&markers=color:red|{latitude},{longitude}")
          .Append("&format=png32")
          .Append($"&key={this._settings.Google.ApiKey}");
        var requestUri = new Uri(sb.ToString());

        var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

        return bytes;
    }
}

    The NaverMapService class has a similar logic with the same input and assumptions. Here's the code:

public class NaverMapService : IMapService
{
    public async Task<byte[]> GetMapAsync(HttpRequest req)
    {
        var latitude = req.Query["lat"];
        var longitude = req.Query["long"];
        var zoom = (string)req.Query["zoom"] ?? "13";

        var sb = new StringBuilder();
        sb.Append("https://naveropenapi.apigw.ntruss.com/map-static/v2/raster")
          .Append($"?center={longitude},{latitude}")
          .Append("&w=400")
          .Append("&h=400")
          .Append($"&level={zoom}")
          .Append($"&markers=color:blue|pos:{longitude}%20{latitude}")
          .Append("&format=png")
          .Append("&lang=en");
        var requestUri = new Uri(sb.ToString());

        this._http.DefaultRequestHeaders.Clear();
        this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY-ID", this._settings.Naver.ClientId);
        this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY", this._settings.Naver.ClientSecret);

        var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

        return bytes;
    }
}

Let's take a look at the function endpoints - here are the ones for Google Maps and Naver Map. As the GetMapAsync(req) method returns a byte array, you need to return it as a FileContentResult with the content type image/png.

// Google Maps
public class GoogleMapsTrigger
{
    [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
    public async Task<IActionResult> GetGoogleMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
    {
        this._logger.LogInformation("C# HTTP trigger function processed a request.");

        var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

        return new FileContentResult(bytes, "image/png");
    }
}

// Naver Map
public class NaverMapsTrigger
{
    [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
    public async Task<IActionResult> GetNaverMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
    {
        this._logger.LogInformation("C# HTTP trigger function processed a request.");

        var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

        return new FileContentResult(bytes, "image/png");
    }
}

    Then, add the OpenAPI capability to each function endpoint. Here's the example:

// Google Maps
public class GoogleMapsTrigger
{
    [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
    // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
    [OpenApiOperation(operationId: nameof(GoogleMapsTrigger.GetGoogleMapImage), tags: new[] { "google" })]
    [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
    [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
    [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `14`")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
    // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
    public async Task<IActionResult> GetGoogleMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
    {
        ...
    }
}

// Naver Map
public class NaverMapsTrigger
{
    [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
    // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
    [OpenApiOperation(operationId: nameof(NaverMapsTrigger.GetNaverMapImage), tags: new[] { "naver" })]
    [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
    [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
    [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `13`")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
    // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
    public async Task<IActionResult> GetNaverMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
    {
        ...
    }
}

Run the function app locally. Here are the latitude and longitude values for Seoul, Korea.

    • latitude: 37.574703
    • longitude: 126.978519

    Google Map for Seoul

    It seems to be working! Let's deploy it to Azure.

    API Management integration

Visual Studio 2022 provides a built-in tool for deploying Azure Functions apps to Azure. In addition, the deployment tool supports seamless integration with Azure API Management as long as your Azure Functions app enables the OpenAPI capability. In this post, I'm going to use this feature. Right-click on the Azure Functions project and select the "Publish" menu.

    Visual Studio context menu for publish

    Then, you will see the publish screen. Click the "➕ New" button to create a new publish profile.

    Create a new publish profile

    Choose "Azure" and click the "Next" button.

    Choose the target platform for publish

Select the app instance. This time, simply pick the "Azure Function App (Windows)" option, then click "Next".

    Choose the target OS for publish

If you have already provisioned an Azure Functions app instance, you will see it on the screen. Otherwise, create a new one. Then, click "Next".

    Choose the target instance for publish

    In the next step, you are asked to choose the Azure API Management instance for integration. Choose one, or create a new one. Then, click "Next".

    Choose the APIM instance for integration

Finally, select the publish method - either local publish or a GitHub Actions workflow. Let's pick the local publish method for now. Then, click "Finish".

    Choose the deployment type

    The publish profile has been created. Click "Close" to move on.

    Publish profile created

    Now the function app is ready for deployment. Click the "Publish" button and see how it goes.

    Publish function app

    The Azure function app has been deployed and integrated with the Azure API Management instance.

    Function app published

    Go to the published function app site, and everything looks OK.

    Function app on Azure

    And API Management shows the function app integrated perfectly.

    Function app integrated with APIM

    Now, you are ready to create a custom connector. Let's move on.

    Two ways of building custom connector

    There are two ways to create a custom connector.

    Export custom connector from API Management

First, you can use the built-in API Management feature directly. Click the ellipsis icon and select the "Create Power Connector" menu.

    Create Power Connector menu

    Then, you are redirected to this screen. While the "API" and "API display name" fields are pre-populated, you need to choose the Power Platform environment tied to your tenant. Choose an environment, click "Authenticate", and click "Create".

    Create custom connector screen

    Check your custom connector on Power Apps or Power Automate side.

    Custom connector created on Power Apps

However, there's a caveat to this approach. Because it's tied to your tenant, you should use the second approach if you want to use this custom connector on another tenant.

    Import custom connector from OpenAPI document or URL

    Click the ellipsis icon again and select the "Export" menu.

    Export menu

On the Export API screen, choose the "OpenAPI v2 (JSON)" panel, because Power Platform custom connectors currently accept version 2 of the OpenAPI document.

    Select OpenAPI v2

    Download the OpenAPI document to your local computer and move to your Power Apps or Power Automate page under your desired environment. I'm going to use the Power Automate page. First, go to the "Data" ➡️ "Custom connectors" page. Then, click the "➕ New custom connector" ➡️ "Import an OpenAPI file" at the top right corner.

    New custom connector

    When a modal pops up, give the custom connector a name and import the OpenAPI document exported above. Then, click "Continue".

    Import custom connector

    Actually, that's it! Next, click the "✔️ Create connector" button to create the connector.

    Create custom connector

    Go back to the custom connector page, and you will see the "Maps API" custom connector you just created.

    Custom connector imported

    So, you are ready to create a Power Apps app to display your location on Google Maps or Naver Map! Let's move on.

    Where am I? Power Apps app

    Open Power Apps Studio and create an empty canvas app named "Where am I" with a phone layout.

    Custom connector integration

    To use the custom connector created above, you need to add it to the Power App. Click the cylinder icon on the left and click the "Add data" button.

    Add custom connector to data pane

    Search the custom connector name, "Maps API", and click the custom connector to add.

    Search custom connector

    To use the custom connector, you also need to create a connection to it. Click the "Connect" button and move on.

    Create connection to custom connector

    Now, you've got the connection to the custom connector.

    Connection to custom connector ready

    Controls

    Let's build the Power Apps app. First of all, put three controls onto the canvas: an Image, a Slider, and a Button.

    Power Apps control added

    Click the "Screen1" control and change the value on the property "OnVisible" to the formula below. The formula stores the current slider value in the zoomlevel collection.

    ClearCollect(
        zoomlevel,
        Slider1.Value
    )

    Click the "Button1" control and change the value of the "OnSelect" property to the formula below. It passes the current latitude, longitude and zoom level to the custom connector and receives the image data. The received image data is stored in the result collection.

    ClearCollect(
        result,
        MAPS.GetGoogleMapImage(
            Location.Latitude,
            Location.Longitude,
            { zoom: First(zoomlevel).Value }
        )
    )

    Click the "Image1" control and change the value on the property "Image" to the formula below. It gets the image data from the result collection.

    First(result).Url

    Click the "Slider1" control and change the value of the "OnChange" property to the formula below. It stores the current slider value in the zoomlevel collection, then calls the custom connector to get the image data for the current location.

    ClearCollect(
        zoomlevel,
        Slider1.Value
    );
    ClearCollect(
        result,
        MAPS.GetGoogleMapImage(
            Location.Latitude,
            Location.Longitude,
            { zoom: First(zoomlevel).Value }
        )
    )

    That seems to be OK. Let's click the "Where am I?" button. But it doesn't show the image. The First(result).Url value is actually similar to this:

    appres://blobmanager/1090a86393a843adbfcf428f0b90e91b/1

    It's a reference to the image data stored somewhere you can't access directly from Power Apps.

    Workaround Power Automate workflow

    Therefore, you need a workaround using a Power Automate workflow to sort out this issue. Open the Power Automate Studio, create an instant cloud flow with the Power Apps trigger, and give it the name "Where am I". Then add the input parameters lat, long and zoom.

    Power Apps trigger on Power Automate workflow

    Add custom connector action to get the map image.

    Select action to get the Google Maps image

    Pass the appropriate parameters to the action.

    Pass parameters to the custom connector action

    Add a "Response" action and put the following values into each field.

    • "Body" field:

      {
        "base64Image": <power_automate_expression>
      }

      The <power_automate_expression> should be concat('data:', body('GetGoogleMapImage')?['$content-type'], ';base64,', body('GetGoogleMapImage')?['$content']).

    • "Response Body JSON Schema" field:

      {
        "type": "object",
        "properties": {
          "base64Image": {
            "type": "string"
          }
        }
      }

    Format the Response action

    Let's return to the Power Apps Studio and add the Power Automate workflow you created.

    Add Power Automate workflow

    Select "Button1" and change the value of the "OnSelect" property to the formula below. It replaces the direct call to the custom connector with the Power Automate workflow.

    ClearCollect(
        result,
        WhereamI.Run(
            Location.Latitude,
            Location.Longitude,
            First(zoomlevel).Value
        )
    )

    Also, change the value of the "OnChange" property of the "Slider1" control to the formula below, replacing the custom connector call with the Power Automate workflow call.

    ClearCollect(
        zoomlevel,
        Slider1.Value
    );
    ClearCollect(
        result,
        WhereamI.Run(
            Location.Latitude,
            Location.Longitude,
            First(zoomlevel).Value
        )
    )

    And finally, change the "Image1" control's "Image" property value to the formula below.

    First(result).base64Image

    The workaround has been applied. Click the "Where am I?" button to see your current location from Google Maps.

    Run Power Apps app #1

    If you change the slider left or right, you will see either the zoomed-in image or the zoomed-out image.

    Run Power Apps app #2

    Now, you've created a Power Apps app to show your current location using:

    • Google Maps API through the custom connector, and
    • Custom connector written in Azure Functions with OpenAPI extension!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how the custom connector works. After forking the repository, make sure that you create all the necessary secrets in your repository, as documented in the README file.

    Then, click the "Deploy to Azure" button, and it will provision all necessary Azure resources and deploy an Azure Functions app for a custom connector.

    Deploy To Azure

    Once everything is deployed successfully, try to create a Power Apps app and Power Automate workflow to see your current location in real-time!

    Resources: For self-study!

    Want to know more about Power Platform custom connectors and the Azure Functions OpenAPI extension? Here are several resources you can take a look at:


    16 posts tagged with "azure-functions"

    View All Tags

    · 5 min read
    Madhura Bharadwaj

    Welcome to Day 26 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Monitoring your Azure Functions
    • Built-in log streaming
    • Live Metrics stream
    • Troubleshooting Azure Functions


    Monitoring your Azure Functions:

    Azure Functions uses Application Insights to collect and analyze log data from individual function executions in your function app.

    Using Application Insights

    Application Insights collects log, performance, and error data. Because it automatically detects performance anomalies and includes powerful analytics tools, you can more easily diagnose issues and better understand how your functions are used. These tools are designed to help you continuously improve the performance and usability of your functions. You can even use Application Insights during local function app project development.

    Typically, you create an Application Insights instance when you create your function app. In this case, the instrumentation key required for the integration is already set as an application setting named APPINSIGHTS_INSTRUMENTATIONKEY. With Application Insights integration enabled, telemetry data is sent to your connected Application Insights instance. This data includes logs generated by the Functions host, traces written from your functions code, and performance data. In addition to data from your functions and the Functions host, you can also collect data from the Functions scale controller.

    By default, the data collected from your function app is stored in Application Insights. In the Azure portal, Application Insights provides an extensive set of visualizations of your telemetry data. You can drill into error logs and query events and metrics. To learn more, including basic examples of how to view and query your collected data, see Analyze Azure Functions telemetry in Application Insights.
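
    One knob worth knowing about: the volume of telemetry sent to Application Insights is controlled by the sampling settings in host.json. As a hedged example (the values below are common illustrative defaults, not a recommendation for your specific app):

    {
      "version": "2.0",
      "logging": {
        "applicationInsights": {
          "samplingSettings": {
            "isEnabled": true,
            "maxTelemetryItemsPerSecond": 20,
            "excludedTypes": "Request"
          }
        }
      }
    }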

    Using Log Streaming

    In addition to this, you can have a smoother debugging experience through log streaming. There are two ways to view a stream of log files being generated by your function executions.

    • Built-in log streaming: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during local development and when you use the Test tab in the portal. All log-based information is displayed. For more information, see Stream logs. This streaming method supports only a single instance and can't be used with an app running on Linux in a Consumption plan.
    • Live Metrics Stream: when your function app is connected to Application Insights, you can view log data and other metrics in near real-time in the Azure portal using Live Metrics Stream. Use this method when monitoring functions running on multiple-instances or on Linux in a Consumption plan. This method uses sampled data. Log streams can be viewed both in the portal and in most local development environments.
    Monitoring Azure Functions

    Learn how to configure monitoring for your Azure Functions. See the Monitoring Azure Functions data reference for detailed information on the metrics and logs created by Azure Functions.

    In addition, Azure Functions uses Azure Monitor to monitor the health of your function apps. Azure Functions collects the same kinds of monitoring data as other Azure resources, as described in Azure Monitor data collection.

    Troubleshooting your Azure Functions:

    When you do run into issues with your function app, Azure Functions diagnostics points out what’s wrong. It guides you to the right information to troubleshoot and resolve the issue more easily and quickly.

    Let’s explore how to use Azure Functions diagnostics to diagnose and solve common function app issues.

    1. Navigate to your function app in the Azure portal.
    2. Select Diagnose and solve problems to open Azure Functions diagnostics.
    3. Once you’re here, there are multiple ways to retrieve the information you’re looking for. Choose a category that best describes the issue of your function app by using the keywords in the homepage tile. You can also type a keyword that best describes your issue in the search bar. There’s also a section at the bottom of the page that will directly take you to some of the more popular troubleshooting tools. For example, you could type execution to see a list of diagnostic reports related to your function app execution and open them directly from the homepage.

    Monitoring and troubleshooting apps in Azure Functions

    4. For example, click on the Function App Down or Reporting Errors link under the Popular troubleshooting tools section. You will find detailed analysis, insights and next steps for the issues that were detected. On the left you'll see a list of detectors. Click on them to explore more, or if there's a particular keyword you want to look for, type it into the search bar at the top.

    Monitoring and troubleshooting apps in Azure Functions

    TROUBLESHOOTING TIP

    Here are some general troubleshooting tips that you can follow if your function app throws the "Azure Functions Runtime unreachable" error.

    Also be sure to check out the recommended best practices to ensure your Azure Functions are highly reliable. This article details some best practices for designing and deploying efficient function apps that remain healthy and perform well in a cloud-based environment.

    Bonus tip:


    16 posts tagged with "azure-functions"

    View All Tags

    · 9 min read
    Justin Yoo

    Welcome to Day 21 of #30DaysOfServerless!

    We've so far walked through what Azure Event Grid is and how it generally works. Today, let's discuss how Azure Event Grid deals with CloudEvents.


    What We'll Cover


    OK. Let's get started!

    What is CloudEvents?

    Needless to say, events are everywhere. Events come not only from event-driven systems but also from many different systems and devices, including IoT ones like the Raspberry Pi.

    But the problem is that every event publisher (a system or device that creates events) describes its events differently, meaning there is no standard way of describing events. This has caused many issues between systems, mainly from an interoperability perspective.

    1. Consistency: No standard way of describing events resulted in developers having to write their own event handling logic for each event source.
    2. Accessibility: There were no common libraries, tooling and infrastructure to deliver events across systems.
    3. Productivity: Overall productivity decreases because of the lack of a standard event format.

    Cloud Events Logo

    Therefore, the CNCF (Cloud Native Computing Foundation) introduced the concept called CloudEvents. CloudEvents is a specification for describing event data in a common way. Conforming event data to this spec simplifies event declaration and delivery across systems and platforms, resulting in a huge productivity increase.

    How Azure Event Grid brokers CloudEvents

    Before CloudEvents, Azure Event Grid described events in its own way, so to use Azure Event Grid you had to follow the event format/schema that Azure Event Grid declares. However, not every system, service, or application follows the Azure Event Grid schema. Therefore, Azure Event Grid now supports the CloudEvents spec as both an input and an output format.

    Azure Event Grid for Azure

    Take a look at the simple diagram below, which describes how Azure Event Grid captures events raised from various Azure services. In this diagram, Azure Key Vault takes the role of the event source or event publisher, and Azure Logic Apps takes the role of the event handler (I'll discuss Azure Logic Apps as the event handler later in this post). For events raised by Azure services, we use an Azure Event Grid System Topic.

    Azure Event Grid for Azure

    Therefore, let's create an Azure Event Grid System Topic that captures events raised from Azure Key Vault when a new version of a secret is added.

    Azure Event Grid System Topic for Key Vault
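
    If you prefer scripting this step, a rough Azure CLI equivalent might look like the following (the topic name, location and subscription ID are placeholders):

    az eventgrid system-topic create \
      --name kv-secrets-topic \
      --resource-group rg-aegce-krc \
      --location koreacentral \
      --topic-type Microsoft.KeyVault.Vaults \
      --source /subscriptions/<subscription-id>/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx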

    As Azure Event Grid makes use of the pub/sub pattern, you need to create the Azure Event Grid Subscription to consume the events. Here's the subscription that uses the Event Grid data format:

    Azure Event Grid System Subscription for Key Vault in Event Grid Format

    Once you create the subscription, create a new version of the secret on Azure Key Vault. Then, Azure Key Vault raises an event, which is captured in the Event Grid format:

    [
      {
        "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
        "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
        "subject": "hello",
        "eventType": "Microsoft.KeyVault.SecretNewVersionCreated",
        "data": {
          "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
          "VaultName": "kv-xxxxxxxx",
          "ObjectType": "Secret",
          "ObjectName": "hello",
          "Version": "064dfc082fec463f8d4610ed6118811d",
          "NBF": null,
          "EXP": null
        },
        "dataVersion": "1",
        "metadataVersion": "1",
        "eventTime": "2022-09-21T07:08:09.1234567Z"
      }
    ]

    So, how is it different from the CloudEvents format? Let's take a look. According to the spec, the JSON data in CloudEvents might look like this:

    {
      "id" : "C234-1234-1234",
      "source" : "/mycontext",
      "specversion" : "1.0",
      "type" : "com.example.someevent",
      "comexampleextension1" : "value",
      "time" : "2018-04-05T17:31:00Z",
      "datacontenttype" : "application/cloudevents+json",
      "data" : {
        "appinfoA" : "abc",
        "appinfoB" : 123,
        "appinfoC" : true
      }
    }

    This time, let's create another subscription using the CloudEvents schema. Here's how to create the subscription against the system topic:

    Azure Event Grid System Subscription for Key Vault in CloudEvents Format

    Now, the same Key Vault event is delivered in the CloudEvents format:

    {
      "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
      "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
      "specversion": "1.0",
      "type": "Microsoft.KeyVault.SecretNewVersionCreated",
      "subject": "hello",
      "time": "2022-09-21T07:08:09.1234567Z",
      "data": {
        "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
        "VaultName": "kv-xxxxxxxx",
        "ObjectType": "Secret",
        "ObjectName": "hello",
        "Version": "064dfc082fec463f8d4610ed6118811d",
        "NBF": null,
        "EXP": null
      }
    }

    Can you identify some differences between the Event Grid format and the CloudEvents format? Fortunately, both Event Grid schema and CloudEvents schema look similar to each other. But they might be significantly different if you use a different event source outside Azure.

    Azure Event Grid for Systems outside Azure

    As mentioned above, the event data described outside Azure or your own applications within Azure might not be understandable by Azure Event Grid. In this case, we need to use Azure Event Grid Custom Topic. Here's the diagram for it:

    Azure Event Grid for Applications outside Azure

    Let's create the Azure Event Grid Custom Topic. When you create the topic, make sure that you use the CloudEvent schema during the provisioning process:

    Azure Event Grid Custom Topic

    If your application needs to publish events to Azure Event Grid Custom Topic, your application should build the event data in the CloudEvents format. If you use a .NET application, add the NuGet package first.

    dotnet add package Azure.Messaging.EventGrid

    Then, create the publisher instance. You've already got the topic endpoint URL and the access key.

    var topicEndpoint = new Uri("<Azure Event Grid Custom Topic Endpoint URL>");
    var credential = new AzureKeyCredential("<Azure Event Grid Custom Topic Access Key>");
    var publisher = new EventGridPublisherClient(topicEndpoint, credential);

    Now, build the event data like below. Make sure that you follow the CloudEvents schema that requires additional metadata like event source, event type and content type.

    var source = "/your/event/source";
    var type = "com.source.event.your/OnEventOccurs";

    var data = new MyEventData() { Hello = "World" };

    var @event = new CloudEvent(source, type, data);
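
    Here, MyEventData is assumed to be a plain serialisable class along these lines (its definition isn't shown in the original snippet):

    public class MyEventData
    {
        public string Hello { get; set; }
    }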

    And finally, send the event to Azure Event Grid Custom Topic.

    await publisher.SendEventAsync(@event);

    The captured event data looks like the following:

    {
      "id": "cc2b2775-52b8-43b8-a7cc-c1c33c2b2e59",
      "source": "/your/event/source",
      "type": "com.source.event.my/OnEventOccurs",
      "data": {
        "Hello": "World"
      },
      "time": "2022-09-21T07:08:09.1234567+00:00",
      "specversion": "1.0"
    }

    However, someone might insist that, due to limitations, their existing application doesn't or can't emit event data in the CloudEvents format. In this case, what should we do? There's no standard way to get such event data into the Azure Event Grid Custom Topic in the CloudEvents format. One approach is to put a converter between the existing application and the Azure Event Grid Custom Topic, like below:

    Azure Event Grid for Applications outside Azure with Converter

    Once the Function app (or any converter app) receives the legacy event data, it internally converts it to the CloudEvents format and publishes it to Azure Event Grid.

    // Read and deserialise the legacy payload from the incoming request
    var data = default(MyRequestData);
    using (var reader = new StreamReader(req.Body))
    {
        var serialised = await reader.ReadToEndAsync();
        data = JsonConvert.DeserializeObject<MyRequestData>(serialised);
    }

    // Convert it to a CloudEvent (source and type as defined earlier) and publish it
    var converted = new MyEventData() { Hello = data.Lorem };
    var @event = new CloudEvent(source, type, converted);
    await publisher.SendEventAsync(@event);

    The converted event data is captured like this:

    {
      "id": "df296da3-77cd-4da2-8122-91f631941610",
      "source": "/your/event/source",
      "type": "com.source.event.my/OnEventOccurs",
      "data": {
        "Hello": "ipsum"
      },
      "time": "2022-09-21T07:08:09.1234567+00:00",
      "specversion": "1.0"
    }

    This approach is beneficial in many integration scenarios because it canonicalises all the event data.

    How Azure Logic Apps consumes CloudEvents

    I put Azure Logic Apps as the event handler in the previous diagrams. According to the CloudEvents spec, each event handler must implement request validation to avoid abuse. One good thing about using Azure Logic Apps is that it has already implemented this request validation feature, which means we can just subscribe to the topic and consume the event data.

    Create a new Logic Apps instance and add the HTTP Request trigger. Once you save it, you will get the endpoint URL.

    Azure Logic Apps with HTTP Request Trigger

    Then, create the Azure Event Grid Subscription with:

    • Endpoint type: Webhook
    • Endpoint URL: The Logic Apps URL from above.

    Azure Logic Apps with HTTP Request Trigger

    Once the subscription is ready, this Logic App works as the event handler. Here's how it receives the CloudEvents data from the subscription.

    Azure Logic Apps that Received CloudEvents data

    Now you've got the CloudEvents data. It's entirely up to you to handle that event data however you want!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how Azure Event Grid with CloudEvents works. Alternatively, the "Deploy to Azure" button below will provision all necessary Azure resources and deploy an Azure Functions app to mimic the event publisher.

    Deploy To Azure

    Resources: For self-study!

    Want to know more about CloudEvents in real-life examples? Here are several resources you can take a look at:


    16 posts tagged with "azure-functions"

    View All Tags

    · 6 min read
    Ramya Oruganti

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Retry Policy Support - in Apache Kafka Extension
    • AutoOffsetReset property - in Apache Kafka Extension
    • Key support for Kafka messages - in Apache Kafka Extension
    • References: Apache Kafka Extension for Azure Functions


    Recently we launched the Apache Kafka extension for Azure Functions in GA, with some cool new features like deserialization of Avro Generic records and Kafka headers support. We received great responses, so we're back with more updates!

    Retry Policy support

    Handling errors in Azure Functions is important to avoid losing data, missing events, and to monitor the health of an application. The Apache Kafka extension for Azure Functions supports retry policies, which tell the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.

    A retry policy is evaluated when a trigger function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry.

    There are two retry strategies supported by the policy that you can configure: fixed delay and exponential backoff.

    1. Fixed Delay - A specified amount of time is allowed to elapse between each retry.
    2. Exponential Backoff - The first retry waits for the minimum delay. On subsequent retries, time is added exponentially to the initial duration for each retry, until the maximum delay is reached. Exponential back-off adds some small randomization to delays to stagger retries in high-throughput scenarios.
    Please Note

    The retry policy for the Kafka extension is NOT supported for C# (in-process and out-of-process) triggers and output bindings. It is supported for Java, Node (JavaScript, TypeScript), PowerShell and Python triggers and output bindings.

    Here is a sample code view of the exponential backoff retry strategy:

    Error Handling with Apache Kafka extension for Azure Functions
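
    Since the sample above is shown as a screenshot, here is a hedged sketch of what an exponential backoff retry policy can look like in function.json for a Kafka-triggered function (the broker, topic and interval values are illustrative):

    {
      "bindings": [
        {
          "type": "kafkaTrigger",
          "name": "kafkaEvent",
          "direction": "in",
          "topic": "myTopic",
          "brokerList": "BrokerList",
          "consumerGroup": "$Default"
        }
      ],
      "retry": {
        "strategy": "exponentialBackoff",
        "maxRetryCount": 5,
        "minimumInterval": "00:00:10",
        "maximumInterval": "00:15:00"
      }
    }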

    AutoOffsetReset property

    The AutoOffsetReset property enables customers to configure the behaviour in the absence of an initial offset. Imagine a scenario where you need to change the consumer group name. Previously, a consumer connecting with a new consumer group had to reprocess all events starting from the oldest (earliest) one, as this was the default and the setting wasn't exposed as a configurable option in the Apache Kafka extension for Azure Functions. With the help of this Kafka setting, you can configure how to start processing events for newly created consumer groups.

    Due to the lack of the ability to configure this setting, offset commit errors were causing topics to restart from the earliest offset. Users wanted to be able to set the offset setting to either latest or earliest based on their requirements.

    We are happy to share that we have made the AutoOffsetReset setting configurable to either Earliest (the default) or Latest. Setting the value to Earliest consumes messages from the earliest/smallest offset, i.e. the beginning of the topic partition. Setting the property to Latest consumes messages from the latest/largest offset, i.e. the end of the topic partition. This is supported for all Azure Functions supported languages (C# (in-process and out-of-process), Java, Node (JavaScript and TypeScript), PowerShell and Python) and can be used for both triggers and output bindings.

    Error Handling with Apache Kafka extension for Azure Functions

    Key support for Kafka messages

    With keys, the producer/output binding can determine which broker partition to write to based on the message. So alongside the message value, we can choose to send a message key, and that key can be whatever you want: a string, a number, and so on. If you don't send a key, the key is set to null and the data is distributed across partitions in a simple round-robin fashion. But if you send a key with your message, all messages that share the same key will always go to the same partition, so you can group related messages into partitions.

    Previously, while consuming a Kafka event message using the Azure Functions Kafka extension, the event key was always none even though the key was present in the event message.

    Key support was implemented in the extension, which enables customers to view the key of Kafka event messages coming into the Kafka trigger and to set keys on messages going out to Kafka topics through the output binding. Key support covers both triggers and output bindings for all Azure Functions supported languages (C# (in-process and out-of-process), Java, Node (JavaScript and TypeScript), PowerShell and Python).

    Here is a view of output binding producer code where Kafka messages are given a key:

    Error Handling with Apache Kafka extension for Azure Functions

    Conclusion:

    In this article you have learned about the latest additions to the Apache Kafka extension for Azure Functions. If you have been waiting for these features or need them, you're all set: go ahead and try them out! They are available in the latest extension bundles.

    Want to learn more?

    Please refer to Apache Kafka bindings for Azure Functions | Microsoft Docs for detailed documentation, samples for the Azure Functions supported languages, and more!

    References

    FEEDBACK WELCOME

    Keep in touch with us on Twitter via @AzureFunctions.


    16 posts tagged with "azure-functions"

    View All Tags

    · 6 min read
    Melony Qin

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Custom Handlers, and why use them?
    • How Custom Handler Works
    • Message Processing With Azure Custom Handler
    • Azure Portal Monitoring


    If you have been working with Azure Functions for a while, you may know that Azure Functions is a serverless FaaS (Function as a Service) offering provided by Microsoft Azure, built for key scenarios including building web APIs, processing file uploads, responding to database changes, processing IoT data streams, managing message queues, and more.

    Custom Handlers: What and Why

    Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell. If you want extended language support in Azure Functions for other languages such as Go and Rust, that's where custom handlers come in.

    An Azure Functions custom handler lets you author functions in any language that supports HTTP primitives. With custom handlers, you can use triggers and input and output bindings via extension bundles, so all the triggers and bindings you're used to with Azure Functions are supported.

    How a Custom Handler Works

    Let's take a look at custom handlers and how they work.

    • A request is sent to the Functions host when an event is triggered. The Functions host then issues a request payload to the custom handler; the payload holds the trigger and input binding data as well as other metadata for the function.
    • The custom handler is a local HTTP web server. It executes the function code and returns a response payload to the Functions host.
    • The Functions host passes data from the response to the function's output bindings, which pass it on to downstream services for processing.

    Check out this article to know more about Azure functions custom handlers.


    Message processing with Custom Handlers

    Message processing is one of the key scenarios that Azure functions are trying to address. In the message-processing scenario, events are often collected in queues. These events can trigger Azure functions to execute a piece of business logic.

    You can use the Service Bus trigger to respond to messages from an Azure Service Bus queue - it's then up to the Azure functions custom handlers to take further actions to process the messages. The process is described in the following diagram:

    Building Serverless Go Applications with Azure functions custom handlers

    In Azure Functions, the function.json file defines the function's trigger, input and output bindings, and other configuration settings. Note that every function can have multiple bindings, but only one trigger. The following is an example of setting up the Service Bus queue trigger in the function.json file:

    {
      "bindings": [
        {
          "name": "queueItem",
          "type": "serviceBusTrigger",
          "direction": "in",
          "queueName": "functionqueue",
          "connection": "ServiceBusConnection"
        }
      ]
    }

    You can add a binding definition in the function.json to write the output to a database or other locations of your desire. Supported bindings can be found here.
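
    For instance (an illustrative addition, not part of the sample project), appending a Blob storage output binding to the same function.json could look like this:

    {
      "bindings": [
        {
          "name": "queueItem",
          "type": "serviceBusTrigger",
          "direction": "in",
          "queueName": "functionqueue",
          "connection": "ServiceBusConnection"
        },
        {
          "name": "outputBlob",
          "type": "blob",
          "direction": "out",
          "path": "processed/{rand-guid}.txt",
          "connection": "AzureWebJobsStorage"
        }
      ]
    }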

    As we're programming in Go, we need to set the value of defaultExecutablePath to our handler executable in the customHandler.description section of the host.json file.

    Assume we're programming on Windows and have named our Go application server.go. After we run the go build server.go command, it produces an executable called server.exe. So we set server.exe in host.json, as in the following example:

    "customHandler": {
      "description": {
        "defaultExecutablePath": "./server.exe",
        "workingDirectory": "",
        "arguments": []
      }
    }

    We're showcasing a simple Go application with Azure Functions custom handlers, where we print out the messages received from the Functions host. The following is the full code of the server.go application:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
        "os"
    )

    // InvokeRequest mirrors the payload the Functions host sends to the custom handler.
    type InvokeRequest struct {
        Data     map[string]json.RawMessage
        Metadata map[string]interface{}
    }

    func queueHandler(w http.ResponseWriter, r *http.Request) {
        var invokeRequest InvokeRequest

        d := json.NewDecoder(r.Body)
        d.Decode(&invokeRequest)

        // "queueItem" matches the binding name declared in function.json.
        var parsedMessage string
        json.Unmarshal(invokeRequest.Data["queueItem"], &parsedMessage)

        fmt.Println(parsedMessage)
    }

    func main() {
        // The Functions host tells the custom handler which port to listen on.
        customHandlerPort, exists := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
        if !exists {
            customHandlerPort = "8080"
        }
        mux := http.NewServeMux()
        mux.HandleFunc("/MessageProcessorFunction", queueHandler)
        fmt.Println("Go server Listening on: ", customHandlerPort)
        log.Fatal(http.ListenAndServe(":"+customHandlerPort, mux))
    }

    Ensure you have Azure Functions Core Tools installed, then use the func start command to start the function. We'll then use a C#-based message sender application on GitHub to send 3,000 messages to the Azure Service Bus queue. You'll see Azure Functions instantly start to process the messages and print them out as follows:

    Monitoring Serverless Go Applications with Azure functions custom handlers


    Azure portal monitoring

    Let's go back to the Azure portal to see how the messages in the Azure Service Bus queue were processed. There were 3,000 messages queued in the Service Bus queue (the blue line represents incoming messages). The outgoing messages (the red line with the smaller wave shape) show that they are progressively being read by Azure Functions, as shown below:

    Monitoring Serverless Go Applications with Azure functions custom handlers

    Check out this article about monitoring Azure Service Bus for further information.

    Next steps

    Thanks for following along, we’re looking forward to hearing your feedback. Also, if you discover potential issues, please record them on Azure Functions host GitHub repository or tag us @AzureFunctions on Twitter.

    RESOURCES

    Start to build your serverless applications with custom handlers, check out the official documentation:

    Life is a journey of learning. Let’s stay tuned!


    16 posts tagged with "azure-functions"

    View All Tags

    · 5 min read
    Anthony Chu

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Using Visual Studio
    • Using Visual Studio Code: Docker, ACA extensions
    • Using Azure CLI
    • Using CI/CD Pipelines


    Last week, @kendallroden wrote about what it means to be Cloud-Native and how Azure Container Apps provides a serverless containers platform for hosting all of your Cloud-Native applications. Today, we’ll walk through a few ways to get your apps up and running on Azure Container Apps.

    Depending on where you are in your Cloud-Native app development journey, you might choose to use different tools to deploy your apps.

    • “Right-click, publish” – Deploying an app directly from an IDE or code editor is often seen as a bad practice, but it’s one of the quickest ways to test out an app in a cloud environment.
    • Command line interface – CLIs are useful for deploying apps from a terminal. Commands can be run manually or in a script.
    • Continuous integration/deployment – To deploy production apps, the recommended approach is to automate the process in a robust CI/CD pipeline.

    Let's explore some of these options in more depth.

    Visual Studio

    Visual Studio 2022 has built-in support for deploying .NET applications to Azure Container Apps. You can use the familiar publish dialog to provision Container Apps resources and deploy to them directly. This helps you prototype an app and see it run in Azure Container Apps with the least amount of effort.

    Journey to the cloud with Azure Container Apps

    Once you’re happy with the app and it’s ready for production, Visual Studio allows you to push your code to GitHub and set up a GitHub Actions workflow to build and deploy your app every time you push changes. You can do this by checking a box.

    Journey to the cloud with Azure Container Apps

    Visual Studio Code

    There are a couple of valuable extensions that you’ll want to install if you’re working in VS Code.

    Docker extension

    The Docker extension provides commands for building a container image for your app and pushing it to a container registry. It can even do this without requiring Docker Desktop on your local machine: the "Build image in Azure" command remotely builds and pushes a container image to Azure Container Registry.

    Journey to the cloud with Azure Container Apps

    And if your app doesn't have a Dockerfile, the extension will generate one for you.

    Journey to the cloud with Azure Container Apps

    Azure Container Apps extension

    Once you’ve built your container image and pushed it to a registry, the Azure Container Apps VS Code extension provides commands for creating a container app and deploying revisions using the image you’ve built.

    Journey to the cloud with Azure Container Apps

    Azure CLI

    The Azure CLI can be used to manage pretty much anything in Azure. For Azure Container Apps, you’ll find commands for creating, updating, and managing your Container Apps resources.

    Just like in VS Code, with a few commands in the Azure CLI, you can create your Azure resources, build and push your container image, and then deploy it to a container app.
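
    As a rough sketch of those "few commands" (the resource names, location and port below are placeholders; a private registry also needs credentials or a managed identity configured):

    # create a resource group and build/push the image with Azure Container Registry
    az group create --name mygroup --location eastus
    az acr build --registry myregistry --image hello-aca:v1 .

    # create a Container Apps environment and deploy the image to a container app
    az containerapp env create --name myenv --resource-group mygroup --location eastus
    az containerapp create \
      --name hello-aca \
      --resource-group mygroup \
      --environment myenv \
      --image myregistry.azurecr.io/hello-aca:v1 \
      --target-port 80 \
      --ingress external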

    To make things as simple as possible, the Azure CLI also has an “az containerapp up” command. This single command takes care of everything that’s needed to turn your source code from your local machine to a cloud-hosted application in Azure Container Apps.

    az containerapp up --name myapp --source ./src

    We saw earlier that Visual Studio can generate a GitHub Actions workflow to automatically build and deploy your app on every commit. “az containerapp up” can do this too. The following adds a workflow to a repo.

    az containerapp up --name myapp --repo https://github.com/myorg/myproject

    CI/CD pipelines

    When it’s time to take your app to production, it’s strongly recommended to set up a CI/CD pipeline to automatically and repeatably build, test, and deploy it. We’ve already seen that tools such as Visual Studio and Azure CLI can automatically generate a workflow for GitHub Actions. You can set up a pipeline in Azure DevOps too. This is an example Azure DevOps pipeline.

    trigger:
      branches:
        include:
          - main

    pool:
      vmImage: ubuntu-latest

    stages:

      - stage: Build
        jobs:
          - job: build
            displayName: Build app
            steps:
              - task: Docker@2
                inputs:
                  containerRegistry: 'myregistry'
                  repository: 'hello-aca'
                  command: 'buildAndPush'
                  Dockerfile: 'hello-container-apps/Dockerfile'
                  tags: '$(Build.BuildId)'

      - stage: Deploy
        jobs:
          - job: deploy
            displayName: Deploy app
            steps:
              - task: AzureCLI@2
                inputs:
                  azureSubscription: 'my-subscription(5361b9d6-46ea-43c3-a898-15f14afb0db6)'
                  scriptType: 'bash'
                  scriptLocation: 'inlineScript'
                  inlineScript: |
                    # automatically install Container Apps CLI extension
                    az config set extension.use_dynamic_install=yes_without_prompt

                    # ensure registry is configured in container app
                    az containerapp registry set \
                      --name hello-aca \
                      --resource-group mygroup \
                      --server myregistry.azurecr.io \
                      --identity system

                    # update container app
                    az containerapp update \
                      --name hello-aca \
                      --resource-group mygroup \
                      --image myregistry.azurecr.io/hello-aca:$(Build.BuildId)

    Conclusion

    In this article, we looked at a few ways to deploy your Cloud-Native applications to Azure Container Apps and how to decide which one to use based on where you are in your app’s journey to the cloud.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.


    16 posts tagged with "azure-functions"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings, that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
    • Integrate Functions with CosmosDB and SignalR to bring real-time push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

    Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn, which walks you through concepts, one at a time, with code. Even better, sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days), but this time with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

    Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps, which explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!


    16 posts tagged with "azure-functions"

    View All Tags

    · 7 min read
    Jay Miller

    Welcome to Day 7 of #30DaysOfServerless!

    Over the past couple of days, we've explored Azure Functions from the perspective of specific programming languages. Today we'll continue that trend by looking at Python - exploring the Timer Trigger and CosmosDB binding, and showcasing integration with a FastAPI-implemented web app.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance: Azure Functions On Python
    • Build & Deploy: Wildfire Detection Apps with Timer Trigger + CosmosDB
    • Demo: My Fire Map App: Using FastAPI and Azure Maps to visualize data
    • Next Steps: Explore Azure Samples
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Developer Guidance

    If you're a Python developer new to serverless on Azure, start with the Azure Functions Python Developer Guide. It covers:

    • Quickstarts with Visual Studio Code and Azure CLI
    • Adopting best practices for hosting, reliability and efficiency.
    • Tutorials showcasing Azure automation, image classification and more
    • Samples showcasing Azure Functions features for Python developers

    Now let's dive in and build our first Python-based Azure Functions app.


    Detecting Wildfires Around the World?

    I live in California which is known for lots of wildfires. I wanted to create a proof of concept for developing an application that could let me know if there was a wildfire detected near my home.

    NASA has a few satellites orbiting the Earth that can detect wildfires. These satellites scan for radiative heat and use that to determine the likelihood of a wildfire. NASA updates the information about every 30 minutes, and it can take about four hours to scan and process it.

    Fire Point Near Austin, TX

    I want to get the information but I don't want to ping NASA or another service every time I check.

    What if I occasionally download all the data I need? Then I can ping that as much as I like.

    I can create a script that does just that. Any time I say "I can create a script", that's a cue for me to consider using an Azure Function. With the function running in the cloud, I can ensure the script runs even when I'm not at my computer.

    How the Timer Trigger Works

    This function will utilize the Timer Trigger. This means Azure will call this function to run at a scheduled interval. This isn't the only way to keep the data in sync, but we know that ArcGIS, the service we're using, says the data is only updated every 30 minutes or so.

    To learn more about the TimerTrigger as a concept, check out the Azure Functions documentation around Timers.

    When we create the function we tell it a few things, like where the script will live (in our case in __init__.py), the type and direction, and notably how often it should run. We specify the timer using "schedule": "<THE CRON INTERVAL>". For us that's 0 0,30 * * * *, which means every 30 minutes, on the hour and half-hour.

    {
      "scriptFile": "__init__.py",
      "bindings": [
        {
          "name": "reqTimer",
          "type": "timerTrigger",
          "direction": "in",
          "schedule": "0 0,30 * * * *"
        }
      ]
    }

    Next, we create the code that runs when the function is called.

    Connecting to the Database and our Source

    Disclaimer: The data that we're pulling is for educational purposes only. This is not meant to be a production-level application. You're welcome to play with this project, but ensure that you're using the data in compliance with Esri.

    Our function does two important things.

    1. It pulls data from ArcGIS that meets the parameters
    2. It stores that pulled data into our database

    If you want to check out the code in its entirety, check out the GitHub repository.

    Pulling the data from ArcGIS is easy. We can use the ArcGIS Python API. Then, we need to load the service layer. Finally we query that layer for the specific data.

    def write_new_file_data(gis_id: str, layer: int = 0) -> FeatureSet:
        """Returns a JSON String of the Dataframe"""
        fire_data = g.content.get(gis_id)
        feature = fire_data.layers[layer]  # Loading Featured Layer from ArcGIS
        q = feature.query(
            where="confidence >= 65 AND hours_old <= 4",  # The filter for the query
            return_distinct_values=True,
            out_fields="confidence, hours_old",  # The data we want to store with our points
            out_sr=4326,  # The spatial reference of the data
        )
        return q

    Then we need to store the data in our database.

    We're using Cosmos DB for this. Cosmos DB is a NoSQL database, which means the data looks a lot like a Python dictionary because it's JSON. This means we don't need to worry about converting the data into a format that can be stored in a relational database.

    The second reason is that Cosmos DB is tied into the Azure ecosystem, so if we want to build Azure Functions that react to events around it, we can.

    Our script grabs the information that we pulled from ArcGIS and stores it in our database.

    async with CosmosClient.from_connection_string(COSMOS_CONNECTION_STRING) as client:
        # DATABASE and CONTAINER hold the Cosmos DB database and container names
        database = client.get_database_client(DATABASE)
        container = database.get_container_client(container=CONTAINER)
        for record in data:
            await container.create_item(
                record,
                enable_automatic_id_generation=True,
            )

    In our code, each of these functions lives in its own module. So in the main function we focus solely on what Azure Functions will be doing. The script that gets called is __init__.py; there, we have the entry point call the other functions.

    We created another function called load_and_write that does all the work outlined above. __init__.py will call that.

    async def main(reqTimer: func.TimerRequest) -> None:
        # database and container are the Cosmos DB clients created above
        await update_db.load_and_write(gis_id=GIS_LAYER_ID, database=database, container=container)

    Then we deploy the function to Azure. I like to use VS Code's Azure Extension but you can also deploy it a few other ways.
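
    One of those other ways is the Azure Functions Core Tools CLI; the app name below is a placeholder for your own function app:

    func azure functionapp publish <your-function-app-name>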

    Deploying the function via VS Code

    Once the function is deployed, we can load the Azure portal and see a ping whenever the function is called. The pings correspond to the function being run.

    We can also see the data now living in the datastore.

    Document in Cosmos DB

    It's in the Database, Now What?

    Now the real fun begins. We just loaded the last bit of fire data into a database. We can now query that data and serve it to others.

    As I mentioned before, our Cosmos DB data is also stored in Azure, which means that we can deploy Azure Functions to trigger when new data is added. Perhaps you can use this to check for fires near you and use a Logic App to send an alert to your phone or email.

    Another option is to create a web application that talks to the database and displays the data. I've created an example of this using FastAPI – https://jm-func-us-fire-notify.azurewebsites.net.

    Website that Checks for Fires


    Next Steps

    This article showcased the Timer Trigger and the HTTP Trigger for Azure Functions in Python. Now try exploring other triggers and bindings by browsing Bindings code samples for Python and Azure Functions samples for Python

    Once you've tried out the samples, you may want to explore more advanced integrations or extensions for serverless Python scenarios. Here are some suggestions:

    And check out the resources for more tutorials to build up your Azure Functions skills.

    Exercise

    I encourage you to fork the repository and try building and deploying it yourself! You can see the TimerTrigger and an HTTPTrigger building the website.

    Then try extending it. Perhaps if wildfires are a big thing in your area, you can use some of the data available in Planetary Computer to check out some other datasets.

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/azure-logic-apps/index.html b/blog/tags/azure-logic-apps/index.html index 42a6a4849f..adc0c08fdf 100644 --- a/blog/tags/azure-logic-apps/index.html +++ b/blog/tags/azure-logic-apps/index.html @@ -14,14 +14,14 @@ - +

    3 posts tagged with "azure-logic-apps"

    View All Tags

    · 7 min read
    Devanshi Joshi

    It's Serverless September in a Nutshell! Join us as we unpack our month-long learning journey exploring the core technology pillars for Serverless architectures on Azure. Then end with a look at next steps to build your Cloud-native applications on Azure.


    What We'll Cover

    • Functions-as-a-Service (FaaS)
    • Microservices and Containers
    • Serverless Integrations
    • End-to-End Solutions
    • Developer Tools & #Hacktoberfest

    Banner for Serverless September


    Building Cloud-native Apps

    By definition, cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. You can learn more about cloud-native in Kendall Roden's #ServerlessSeptember post on Going Cloud-native with Azure Container Apps.

    Serverless technologies accelerate productivity and minimize costs for deploying applications at cloud scale. So, what can we build with serverless technologies in cloud-native applications on Azure? Anything that is event-driven - examples include:

    • Microservices - scaled by KEDA-compliant triggers
    • Public API Endpoints - scaled by #concurrent HTTP requests
    • Event-Driven Applications - scaled by length of message queue
    • Web Applications - scaled by #concurrent HTTP requests
    • Background Process - scaled by CPU and Memory usage

    Great - but as developers, we really want to know how we can get started building and deploying serverless solutions on Azure. That was the focus of our #ServerlessSeptember journey. Let's take a quick look at the four key themes.

    Functions-as-a-Service (FaaS)

    Functions-as-a-Service (FaaS) is the epitome of developer productivity for full-stack modern apps. As developers, you don't manage infrastructure and focus only on business logic and application code. And, with serverless compute, you only pay when your code runs - making this the simplest first step in migrating your application to cloud-native.

    In Azure, FaaS is provided by Azure Functions. Check out our Functions + Serverless on Azure to go from learning core concepts to building your first Functions app in your programming language of choice. Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell.

    Want extended language support for languages like Go and Rust? You can use Custom Handlers to make this happen! But what if you want to have long-running functions, or create complex workflows involving more than one function? Read our post on Durable Entities to learn how you can orchestrate this with Azure Functions.

    Check out this recent AskTheExpert Q&A session with the Azure Functions team to get answers to popular community questions on Azure Functions features and usage.

    Microservices and Containers

    Functions-as-a-Service is an ideal first step towards serverless development. But Functions are just one of the 5 pillars of cloud-native. This week we'll look at two of the other pillars: microservices and containers - with specific focus on two core technologies: Azure Container Apps and Dapr (Distributed Application Runtime).

    In this 6-part series of posts, we walk through each technology independently, before looking at the value of building Azure Container Apps with Dapr.

    • In Hello Container Apps we learned core concepts & deployed our first ACA.
    • In Microservices Communication we learned about ACA environments and virtual networks, and how microservices communicate in ACA with a hands-on tutorial.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and configuring ACA for autoscaling with KEDA-compliant triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr), exploring its Building Block APIs and sidecar architecture for working with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
    • Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Build ACA with Dapr

    Check out this recent AskTheExpert Q&A session with the Azure Container Apps team for answers to popular community questions on core features and usage.

    Serverless Integrations

    In the first half of the month we looked at compute resources for building and deploying serverless applications. In the second half, we look at integration tools and resources that automate developer workflows to streamline the end-to-end developer experience.

    In Azure, this is enabled by services like Azure Logic Apps and Azure Event Grid. Azure Logic Apps provides a visual designer to create and automate workflows with little or no code involved. Azure Event Grid provides a highly scalable event broker with support for pub/sub communications to drive async event-driven architectures.

    • In Tracking Weather Data Changes With Logic Apps we look at how you can use Logic Apps to integrate the MSN weather service with Azure CosmosDB, allowing automated collection of weather data on changes.

    • In Teach the Cloud to Read & Categorize Mail we take it a step further, using Logic Apps to automate a workflow that includes a Computer Vision service to "read" images and store the results to CosmosDB.

    • In Integrate with Microsoft Graph we explore a multi-cloud scenario (Azure + M365) where change notifications from Microsoft Graph can be integrated using Logic Apps and Event Hubs to power an onboarding workflow.

    • In Cloud Events with Event Grid we learn about the CloudEvents specification (for consistently describing event data) - and learn how Event Grid brokers events in this format. Azure Logic Apps can be an Event handler (subscriber) that uses the event to trigger an automated workflow on receipt.

      Azure Event Grid And Logic Apps

    Want to explore other such integrations? Browse Azure Architectures and filter by selected Azure services for more real-world scenarios.


    End-to-End Solutions

    We've covered serverless compute solutions (for building your serverless applications) and serverless integration services to automate end-to-end workflows in synchronous or asynchronous event-driven architectures. In this final week, we want to leave you with a sense of end-to-end development tools and use cases that can be enabled by Serverless on Azure. Here are some key examples:

    • In this tutorial, you'll learn to deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps - with a Blazor front-end and two Web API projects.
    • Deploy Java containers to cloud - In this tutorial you learn to build and deploy a Java application running on Spring Boot, by publishing it in a container to Azure Container Registry, then deploying to Azure Container Apps from ACR, via the Azure Portal.
    • Where am I? My GPS Location with Serverless Power Platform Custom Connector - In this step-by-step tutorial you learn to integrate a serverless application (built on Azure Functions and OpenAPI) with Power Platform custom connectors via Azure API Management (API-M). This pattern can empower a new ecosystem of fusion apps for cases like inventory management.

    And in our Serverless Hacks initiative, we walked through an 8-step hack to build a serverless tollbooth. Check out this 12-part video walkthrough of a reference solution using .NET.

    Developer Tools

    But wait - there's more. Those are a sample of the end-to-end application scenarios that are built on serverless on Azure. But what about the developer experience? In this article, we say hello to the Azure Developer CLI - an open-source tool that streamlines your develop-deploy workflow, with simple commands that map to core stages of your development journey. Go from code to cloud with one CLI

    And watch this space for more such tutorials and content through October, including a special #Hacktoberfest focused initiative to encourage and support first-time contributors to open-source. Here's a sneak peek at the project we plan to share - the new awesome-azd templates gallery.


    Join us at Microsoft Ignite!

    Want to continue your learning journey, and learn about what's next for Serverless on Azure? Microsoft Ignite happens Oct 12-14 this year and has multiple sessions on relevant technologies and tools. Check out the Session Catalog and register here to attend online.

    - + \ No newline at end of file diff --git a/blog/tags/azure-logic-apps/page/2/index.html b/blog/tags/azure-logic-apps/page/2/index.html index 6ede6cc6a4..510fe2024e 100644 --- a/blog/tags/azure-logic-apps/page/2/index.html +++ b/blog/tags/azure-logic-apps/page/2/index.html @@ -14,13 +14,13 @@ - +

    3 posts tagged with "azure-logic-apps"

    View All Tags

    · 9 min read
    Justin Yoo

    Welcome to Day 21 of #30DaysOfServerless!

    We've so far walked through what Azure Event Grid is and how it generally works. Today, let's discuss how Azure Event Grid deals with CloudEvents.


    What We'll Cover


    OK. Let's get started!

    What is CloudEvents?

    Needless to say, events are everywhere. Events come not only from event-driven systems but also from many different systems and devices, including IoT ones like the Raspberry Pi.

    But the problem is that every event publisher (system/device that creates events) describes their events differently, meaning there is no standard way of describing events. It has caused many issues between systems, mainly from the interoperability perspective.

    1. Consistency: No standard way of describing events resulted in developers having to write their own event handling logic for each event source.
    2. Accessibility: There were no common libraries, tooling and infrastructure to deliver events across systems.
    3. Productivity: The overall productivity decreases because of the lack of the standard format of events.

    Cloud Events Logo

    Therefore, the CNCF (Cloud Native Computing Foundation) introduced a concept called CloudEvents. CloudEvents is a specification for describing event data in a common way. Conforming your event data to this spec simplifies event declaration and delivery across systems and platforms, resulting in a huge productivity increase.

    How Azure Event Grid brokers CloudEvents

    Before CloudEvents, Azure Event Grid described events in its own way, so if you wanted to use Azure Event Grid, you had to follow the event format/schema that Azure Event Grid declares. However, not every system, service or application follows the Azure Event Grid schema. Therefore, Azure Event Grid now supports the CloudEvents spec as both an input and an output format.

    Azure Event Grid for Azure

    Take a look at the simple diagram below, which describes how Azure Event Grid captures events raised from various Azure services. In this diagram, Azure Key Vault takes the role of the event source or event publisher, and Azure Logic Apps takes the role of the event handler (I'll discuss Azure Logic Apps as the event handler later in this post). We use Azure Event Grid System Topic for Azure.

    Azure Event Grid for Azure

    Therefore, let's create an Azure Event Grid System Topic that captures events raised from Azure Key Vault when a new version of a secret is added.

    Azure Event Grid System Topic for Key Vault

    As Azure Event Grid makes use of the pub/sub pattern, you need to create the Azure Event Grid Subscription to consume the events. Here's the subscription that uses the Event Grid data format:

    Azure Event Grid System Subscription for Key Vault in Event Grid Format

    Once you create the subscription, create a new version of the secret on Azure Key Vault. Then, Azure Key Vault raises an event, which is captured in the Event Grid format:

    [
        {
            "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
            "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
            "subject": "hello",
            "eventType": "Microsoft.KeyVault.SecretNewVersionCreated",
            "data": {
                "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
                "VaultName": "kv-xxxxxxxx",
                "ObjectType": "Secret",
                "ObjectName": "hello",
                "Version": "064dfc082fec463f8d4610ed6118811d",
                "NBF": null,
                "EXP": null
            },
            "dataVersion": "1",
            "metadataVersion": "1",
            "eventTime": "2022-09-21T07:08:09.1234567Z"
        }
    ]

    So, how is it different from the CloudEvents format? Let's take a look. According to the spec, the JSON data in CloudEvents might look like this:

    {
        "id" : "C234-1234-1234",
        "source" : "/mycontext",
        "specversion" : "1.0",
        "type" : "com.example.someevent",
        "comexampleextension1" : "value",
        "time" : "2018-04-05T17:31:00Z",
        "datacontenttype" : "application/cloudevents+json",
        "data" : {
            "appinfoA" : "abc",
            "appinfoB" : 123,
            "appinfoC" : true
        }
    }

    This time, let's create another subscription using the CloudEvents schema. Here's how to create the subscription against the system topic:

    Azure Event Grid System Subscription for Key Vault in CloudEvents Format

    Therefore, Azure Key Vault emits the event data in the CloudEvents format:

    {
        "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
        "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
        "specversion": "1.0",
        "type": "Microsoft.KeyVault.SecretNewVersionCreated",
        "subject": "hello",
        "time": "2022-09-21T07:08:09.1234567Z",
        "data": {
            "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
            "VaultName": "kv-xxxxxxxx",
            "ObjectType": "Secret",
            "ObjectName": "hello",
            "Version": "064dfc082fec463f8d4610ed6118811d",
            "NBF": null,
            "EXP": null
        }
    }

    Can you identify some differences between the Event Grid format and the CloudEvents format? Fortunately, both Event Grid schema and CloudEvents schema look similar to each other. But they might be significantly different if you use a different event source outside Azure.

    Azure Event Grid for Systems outside Azure

    As mentioned above, the event data described outside Azure or your own applications within Azure might not be understandable by Azure Event Grid. In this case, we need to use Azure Event Grid Custom Topic. Here's the diagram for it:

    Azure Event Grid for Applications outside Azure

    Let's create the Azure Event Grid Custom Topic. When you create the topic, make sure that you use the CloudEvent schema during the provisioning process:

    Azure Event Grid Custom Topic

    If your application needs to publish events to Azure Event Grid Custom Topic, your application should build the event data in the CloudEvents format. If you use a .NET application, add the NuGet package first.

    dotnet add package Azure.Messaging.EventGrid

    Then, create the publisher instance. You've already got the topic endpoint URL and the access key.

    var topicEndpoint = new Uri("<Azure Event Grid Custom Topic Endpoint URL>");
    var credential = new AzureKeyCredential("<Azure Event Grid Custom Topic Access Key>");
    var publisher = new EventGridPublisherClient(topicEndpoint, credential);
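    (If you need to look those values up, the Azure CLI can fetch both - for example, `az eventgrid topic show --name <topic-name> --resource-group <resource-group> --query "endpoint"` for the endpoint and `az eventgrid topic key list --name <topic-name> --resource-group <resource-group>` for the access keys.)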

    Now, build the event data like below. Make sure that you follow the CloudEvents schema that requires additional metadata like event source, event type and content type.

    var source = "/your/event/source";
    var type = "com.source.event.your/OnEventOccurs";

    var data = new MyEventData() { Hello = "World" };

    var @event = new CloudEvent(source, type, data);
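    By the way, MyEventData isn't defined in the snippet above; it's just any serializable payload type of your own. A minimal shape along these lines (an assumption for illustration, not code from the post) would do:

    // Hypothetical payload type used in the snippets above
    public class MyEventData
    {
        public string Hello { get; set; }
    }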

    And finally, send the event to Azure Event Grid Custom Topic.

    await publisher.SendEventAsync(@event);

    The captured event data looks like the following:

    {
        "id": "cc2b2775-52b8-43b8-a7cc-c1c33c2b2e59",
        "source": "/your/event/source",
        "type": "com.source.event.my/OnEventOccurs",
        "data": {
            "Hello": "World"
        },
        "time": "2022-09-21T07:08:09.1234567+00:00",
        "specversion": "1.0"
    }

    However, due to limitations, someone might insist that their existing application doesn't or can't emit the event data in the CloudEvents format. In this case, what should we do? There's no standard way of sending the event data in the CloudEvents format to Azure Event Grid Custom Topic. One of the approaches we may be able to apply is to put a converter between the existing application and Azure Event Grid Custom Topic like below:

    Azure Event Grid for Applications outside Azure with Converter

    Once the Function app (or any converter app) receives the legacy event data, it internally converts it into the CloudEvents format and publishes it to Azure Event Grid.

    var data = default(MyRequestData);
    using (var reader = new StreamReader(req.Body))
    {
        var serialised = await reader.ReadToEndAsync();
        data = JsonConvert.DeserializeObject<MyRequestData>(serialised);
    }

    var converted = new MyEventData() { Hello = data.Lorem };
    var @event = new CloudEvent(source, type, converted);

    The converted event data is captured like this:

    {
        "id": "df296da3-77cd-4da2-8122-91f631941610",
        "source": "/your/event/source",
        "type": "com.source.event.my/OnEventOccurs",
        "data": {
            "Hello": "ipsum"
        },
        "time": "2022-09-21T07:08:09.1234567+00:00",
        "specversion": "1.0"
    }
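    Putting those fragments together, the converter function might look roughly like the sketch below. This is only a sketch, not the post's actual code: the function name, the route, and the _publisher field (an EventGridPublisherClient configured as shown earlier) are assumptions, and MyRequestData stands in for whatever legacy payload type your existing application sends.

    public class LegacyEventConverter
    {
        private readonly EventGridPublisherClient _publisher;

        public LegacyEventConverter(EventGridPublisherClient publisher) => _publisher = publisher;

        [FunctionName("ConvertAndPublish")]
        public async Task<IActionResult> ConvertAndPublish(
            [HttpTrigger(AuthorizationLevel.Function, "post", Route = "convert")] HttpRequest req)
        {
            // Read and deserialise the legacy payload
            MyRequestData data;
            using (var reader = new StreamReader(req.Body))
            {
                var serialised = await reader.ReadToEndAsync();
                data = JsonConvert.DeserializeObject<MyRequestData>(serialised);
            }

            // Wrap it in a CloudEvent and publish it to the custom topic
            var converted = new MyEventData() { Hello = data.Lorem };
            var @event = new CloudEvent("/your/event/source", "com.source.event.your/OnEventOccurs", converted);
            await this._publisher.SendEventAsync(@event);

            return new OkResult();
        }
    }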

    This approach is beneficial in many integration scenarios to make all the event data canonicalised.

    How Azure Logic Apps consumes CloudEvents

    I put Azure Logic Apps as the event handler in the previous diagrams. According to the CloudEvents spec, each event handler must implement request validation to avoid abuse. One good thing about using Azure Logic Apps is that it has already implemented this request validation feature. That means we can simply subscribe to the topic and consume the event data.

    Create a new Logic Apps instance and add the HTTP Request trigger. Once it's saved, you will get the endpoint URL.

    Azure Logic Apps with HTTP Request Trigger

    Then, create the Azure Event Grid Subscription with:

    • Endpoint type: Webhook
    • Endpoint URL: The Logic Apps URL from above.

    Azure Logic Apps with HTTP Request Trigger

    Once the subscription is ready, this Logic App works well as the event handler. Here's how it receives the CloudEvents data from the subscription.

    Azure Logic Apps that Received CloudEvents data

    Now you've got the CloudEvents data. It's entirely up to you to handle that event data however you want!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how Azure Event Grid with CloudEvents works. Alternatively, the "Deploy to Azure" button below will provision all necessary Azure resources and deploy an Azure Functions app to mimic the event publisher.

    Deploy To Azure

    Resources: For self-study!

    Want to know more about CloudEvents in real-life examples? Here are several resources you can take a look at:

    - + \ No newline at end of file diff --git a/blog/tags/azure-logic-apps/page/3/index.html b/blog/tags/azure-logic-apps/page/3/index.html index 7343905d79..08fc8e4d36 100644 --- a/blog/tags/azure-logic-apps/page/3/index.html +++ b/blog/tags/azure-logic-apps/page/3/index.html @@ -14,13 +14,13 @@ - +

    3 posts tagged with "azure-logic-apps"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings, that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
    • Integrate Functions with CosmosDB and SignalR to bring real-time push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

    Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn, which walks you through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days), but this time with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

    Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - it explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!

    - + \ No newline at end of file diff --git a/blog/tags/cloud-native/index.html b/blog/tags/cloud-native/index.html index 3070522416..32920e4fc9 100644 --- a/blog/tags/cloud-native/index.html +++ b/blog/tags/cloud-native/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "cloud-native"

    View All Tags

    · 5 min read
    Savannah Ostrowski

    Welcome to Beyond #30DaysOfServerless! in October!

    Yes, it's October!! And since we ended #ServerlessSeptember with a focus on End-to-End Development for Serverless on Azure, we thought it would be good to share updates in October that can help you skill up even further.

    Today, we're following up on the Code to Cloud with azd blog post (Day #29) where we introduced the Azure Developer CLI (azd), an open-source tool for streamlining your end-to-end developer experience going from local development environment to Azure cloud. In today's post, we celebrate the October 2022 release of the tool, with three cool new features.

    And if it's October, it must be #Hacktoberfest!! Read on to learn about how you can take advantage of one of the new features, to contribute to the azd open-source community and ecosystem!

    Ready? Let's go!


    What We'll Cover

    • Azure Friday: Introducing the Azure Developer CLI (Video)
    • October 2022 Release: What's New in the Azure Developer CLI?
      • Azure Pipelines for CI/CD: Learn more
      • Improved Infrastructure as Code structure via Bicep modules: Learn more
      • A new azd template gallery: The new azd-templates gallery for community use! Learn more
    • Awesome-Azd: The new azd-templates gallery for Community use
      • Features: discover, create, contribute, request - templates
      • Hacktoberfest: opportunities to contribute in October - and beyond!


    Azure Friday

    This post is a follow-up to our #ServerlessSeptember post on Code to Cloud with Azure Developer CLI where we introduced azd, a new open-source tool that makes it quick and simple for you to move your application from a local development environment to Azure, streamlining your end-to-end developer workflow in the process.

    Prefer to watch a video overview? I have you covered! Check out my recent conversation with Scott Hanselman on Azure Friday where we:

    • talked about the code-to-cloud developer journey
    • walked through the ins and outs of an azd template
    • explored Azure Developer CLI commands in the terminal and VS Code, and
    • (probably most importantly) got a web app up and running on Azure with a database, Key Vault and monitoring all in a couple of minutes

    October Release

    We're pleased to announce the October 2022 release of the Azure Developer CLI (currently 0.3.0-beta.2). Read the release announcement for more details. Here are the highlights:

    • Azure Pipelines for CI/CD: This addresses azure-dev#101, adding support for Azure Pipelines (alongside GitHub Actions) as a CI/CD provider. Learn more about usage and related documentation.
    • Improved Infrastructure as Code structure via Bicep modules: This addresses azure-dev#543, which recognized the complexity of using a single resources.bicep file for all resources. With this release, azd templates now come with Bicep modules organized by purpose making it easier to edit and understand. Learn more about this structure, and how to use it.
    • New Templates Gallery - awesome-azd: This addresses azure-dev#398, which aimed to make templates more discoverable and easier to contribute. Learn more about how the new gallery improves the template discovery experience.

    In the next section, we'll dive briefly into the last feature, introducing the new awesome-azd site and resource for templates discovery and contribution. And, since it's #Hacktoberfest season, we'll talk about the Contributor Guide and the many ways you can contribute to this project - with, or without, code.


    It's awesome-azd

    Welcome to awesome-azd a new template gallery hosted on GitHub Pages, and meant to be a destination site for discovering, requesting, and contributing azd-templates for community use!

    In addition, its README reflects the awesome-list resource format, providing a location for the community to share "best of" resources for the Azure Developer CLI - from blog posts and videos to full-scale tutorials and templates.

    The Gallery is organized into three main areas:

    Take a minute to explore the Gallery and note the features:

    • Search for templates by name
    • Requested Templates - indicating asks from the community
    • Featured Templates - highlighting high-quality templates
    • Filters - to discover templates by and/or query combinations

    Check back often to see the latest contributed templates and requests!


    Hacktoberfest

    So, why is this a good time to talk about the Gallery? Because October means it's time for #Hacktoberfest - a month-long celebration of open-source projects and their maintainers, and an opportunity for first-time contributors to get support and guidance making their first pull-requests! Check out the #Hacktoberfest topic on GitHub for projects you can contribute to.

    And we hope you think of awesome-azd as another possible project to contribute to.

    Check out the FAQ section to learn how to create, discover, and contribute templates. Or take a couple of minutes to watch this video walkthrough from Jon Gallant:

    And don't hesitate to reach out to us - either via Issues on the repo, or in the Discussions section of this site, to give us feedback!

    Happy Hacking! 🎃


    - + \ No newline at end of file diff --git a/blog/tags/cloudevents/index.html b/blog/tags/cloudevents/index.html index 9e3a81ef64..fdc11a05e0 100644 --- a/blog/tags/cloudevents/index.html +++ b/blog/tags/cloudevents/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "cloudevents"

    View All Tags

    · 9 min read
    Justin Yoo

    Welcome to Day 21 of #30DaysOfServerless!

    We've so far walked through what Azure Event Grid is and how it generally works. Today, let's discuss how Azure Event Grid deals with CloudEvents.


    What We'll Cover


    OK. Let's get started!

    What is CloudEvents?

    Needless to say, events are everywhere. Events come not only from event-driven systems but also from many different systems and devices, including IoT ones like the Raspberry Pi.

    But the problem is that every event publisher (system/device that creates events) describes their events differently, meaning there is no standard way of describing events. It has caused many issues between systems, mainly from the interoperability perspective.

    1. Consistency: No standard way of describing events resulted in developers having to write their own event handling logic for each event source.
    2. Accessibility: There were no common libraries, tooling and infrastructure to deliver events across systems.
    3. Productivity: The overall productivity decreases because of the lack of the standard format of events.

    Cloud Events Logo

    Therefore, the CNCF (Cloud Native Computing Foundation) introduced a concept called CloudEvents. CloudEvents is a specification for describing event data in a common way. Conforming your event data to this spec simplifies event declaration and delivery across systems and platforms, resulting in a huge productivity increase.

    How Azure Event Grid brokers CloudEvents

    Before CloudEvents, Azure Event Grid described events in its own way, so if you wanted to use Azure Event Grid, you had to follow the event format/schema that Azure Event Grid declares. However, not every system, service or application follows the Azure Event Grid schema. Therefore, Azure Event Grid now supports the CloudEvents spec as both an input and an output format.

    Azure Event Grid for Azure

    Take a look at the simple diagram below, which describes how Azure Event Grid captures events raised from various Azure services. In this diagram, Azure Key Vault takes the role of the event source or event publisher, and Azure Logic Apps takes the role of the event handler (I'll discuss Azure Logic Apps as the event handler later in this post). We use Azure Event Grid System Topic for Azure.

    Azure Event Grid for Azure

    Therefore, let's create an Azure Event Grid System Topic that captures events raised from Azure Key Vault when a new version of a secret is added.

    Azure Event Grid System Topic for Key Vault

    As Azure Event Grid makes use of the pub/sub pattern, you need to create the Azure Event Grid Subscription to consume the events. Here's the subscription that uses the Event Grid data format:

    Azure Event Grid System Subscription for Key Vault in Event Grid Format

    Once you create the subscription, create a new version of the secret on Azure Key Vault. Then, Azure Key Vault raises an event, which is captured in the Event Grid format:

    [
        {
            "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
            "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
            "subject": "hello",
            "eventType": "Microsoft.KeyVault.SecretNewVersionCreated",
            "data": {
                "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
                "VaultName": "kv-xxxxxxxx",
                "ObjectType": "Secret",
                "ObjectName": "hello",
                "Version": "064dfc082fec463f8d4610ed6118811d",
                "NBF": null,
                "EXP": null
            },
            "dataVersion": "1",
            "metadataVersion": "1",
            "eventTime": "2022-09-21T07:08:09.1234567Z"
        }
    ]

    So, how is it different from the CloudEvents format? Let's take a look. According to the spec, the JSON data in CloudEvents might look like this:

    {
        "id" : "C234-1234-1234",
        "source" : "/mycontext",
        "specversion" : "1.0",
        "type" : "com.example.someevent",
        "comexampleextension1" : "value",
        "time" : "2018-04-05T17:31:00Z",
        "datacontenttype" : "application/cloudevents+json",
        "data" : {
            "appinfoA" : "abc",
            "appinfoB" : 123,
            "appinfoC" : true
        }
    }

    This time, let's create another subscription using the CloudEvents schema. Here's how to create the subscription against the system topic:

    Azure Event Grid System Subscription for Key Vault in CloudEvents Format

    Therefore, Azure Key Vault emits the event data in the CloudEvents format:

    {
        "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
        "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
        "specversion": "1.0",
        "type": "Microsoft.KeyVault.SecretNewVersionCreated",
        "subject": "hello",
        "time": "2022-09-21T07:08:09.1234567Z",
        "data": {
            "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
            "VaultName": "kv-xxxxxxxx",
            "ObjectType": "Secret",
            "ObjectName": "hello",
            "Version": "064dfc082fec463f8d4610ed6118811d",
            "NBF": null,
            "EXP": null
        }
    }

    Can you identify some differences between the Event Grid format and the CloudEvents format? Fortunately, both Event Grid schema and CloudEvents schema look similar to each other. But they might be significantly different if you use a different event source outside Azure.

    Azure Event Grid for Systems outside Azure

    As mentioned above, the event data described outside Azure or your own applications within Azure might not be understandable by Azure Event Grid. In this case, we need to use Azure Event Grid Custom Topic. Here's the diagram for it:

    Azure Event Grid for Applications outside Azure

    Let's create the Azure Event Grid Custom Topic. When you create the topic, make sure that you use the CloudEvent schema during the provisioning process:

    Azure Event Grid Custom Topic

    If your application needs to publish events to Azure Event Grid Custom Topic, your application should build the event data in the CloudEvents format. If you use a .NET application, add the NuGet package first.

    dotnet add package Azure.Messaging.EventGrid

    Then, create the publisher instance. You've already got the topic endpoint URL and the access key.

    var topicEndpoint = new Uri("<Azure Event Grid Custom Topic Endpoint URL>");
    var credential = new AzureKeyCredential("<Azure Event Grid Custom Topic Access Key>");
    var publisher = new EventGridPublisherClient(topicEndpoint, credential);

    Now, build the event data like below. Make sure that you follow the CloudEvents schema that requires additional metadata like event source, event type and content type.

    var source = "/your/event/source";
    var type = "com.source.event.your/OnEventOccurs";

    var data = new MyEventData() { Hello = "World" };

    var @event = new CloudEvent(source, type, data);

    And finally, send the event to Azure Event Grid Custom Topic.

    await publisher.SendEventAsync(@event);

    The captured event data looks like the following:

    {
        "id": "cc2b2775-52b8-43b8-a7cc-c1c33c2b2e59",
        "source": "/your/event/source",
        "type": "com.source.event.my/OnEventOccurs",
        "data": {
            "Hello": "World"
        },
        "time": "2022-09-21T07:08:09.1234567+00:00",
        "specversion": "1.0"
    }

    However, due to limitations, someone might insist that their existing application doesn't or can't emit the event data in the CloudEvents format. In this case, what should we do? There's no standard way of sending the event data in the CloudEvents format to Azure Event Grid Custom Topic. One of the approaches we may be able to apply is to put a converter between the existing application and Azure Event Grid Custom Topic like below:

    Azure Event Grid for Applications outside Azure with Converter

    Once the Function app (or any converter app) receives the legacy event data, it internally converts it into the CloudEvents format and publishes it to Azure Event Grid.

    var data = default(MyRequestData);
    using (var reader = new StreamReader(req.Body))
    {
        var serialised = await reader.ReadToEndAsync();
        data = JsonConvert.DeserializeObject<MyRequestData>(serialised);
    }

    var converted = new MyEventData() { Hello = data.Lorem };
    var @event = new CloudEvent(source, type, converted);

    The converted event data is captured like this:

    {
        "id": "df296da3-77cd-4da2-8122-91f631941610",
        "source": "/your/event/source",
        "type": "com.source.event.my/OnEventOccurs",
        "data": {
            "Hello": "ipsum"
        },
        "time": "2022-09-21T07:08:09.1234567+00:00",
        "specversion": "1.0"
    }

    This approach is beneficial in many integration scenarios to make all the event data canonicalised.

    How Azure Logic Apps consumes CloudEvents

    I put Azure Logic Apps as the event handler in the previous diagrams. According to the CloudEvents spec, each event handler must implement request validation to avoid abuse. One good thing about using Azure Logic Apps is that it has already implemented this request validation feature. That means we can simply subscribe to the topic and consume the event data.

    Create a new Logic Apps instance and add the HTTP Request trigger. Once it's saved, you will get the endpoint URL.

    Azure Logic Apps with HTTP Request Trigger

    Then, create the Azure Event Grid Subscription with:

    • Endpoint type: Webhook
    • Endpoint URL: The Logic Apps URL from above.

    Azure Logic Apps with HTTP Request Trigger

    Once the subscription is ready, this Logic App works well as the event handler. Here's how it receives the CloudEvents data from the subscription.

    Azure Logic Apps that Received CloudEvents data

    Now you've got the CloudEvents data. It's entirely up to you to handle that event data however you want!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how Azure Event Grid with CloudEvents works. Alternatively, the "Deploy to Azure" button below will provision all necessary Azure resources and deploy an Azure Functions app to mimic the event publisher.

    Deploy To Azure

    Resources: For self-study!

    Want to know more about CloudEvents in real-life examples? Here are several resources you can take a look at:

    - + \ No newline at end of file diff --git a/blog/tags/custom-connector/index.html b/blog/tags/custom-connector/index.html index 238c2e7803..7416376fed 100644 --- a/blog/tags/custom-connector/index.html +++ b/blog/tags/custom-connector/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "custom-connector"

    View All Tags

    · 14 min read
    Justin Yoo

    Welcome to Day 28 of #30DaysOfServerless!

    Since it's the serverless end-to-end week, I'm going to discuss how a serverless application (Azure Functions with the OpenAPI extension) can be seamlessly integrated with a Power Platform custom connector through Azure API Management - in a post I call "Where am I? My GPS Location with Serverless Power Platform Custom Connector"

    OK. Are you ready? Let's get started!


    What We'll Cover

    • What is Power Platform custom connector?
    • Proxy app to Google Maps and Naver Map API
    • API Management integration
    • Two ways of building custom connector
    • Where am I? Power Apps app
    • Exercise: Try this yourself!
    • Resources: For self-study!


    SAMPLE REPO

    Want to follow along? Check out the sample app on GitHub repository used in this post.

    What is Power Platform custom connector?

    Power Platform is a low-code/no-code application development tool for fusion teams - teams made up of people from various disciplines, including field experts (domain experts), IT professionals and professional developers, working together to deliver business value. Within the fusion team, Power Platform turns the domain experts into citizen developers or low-code developers. What makes Power Platform even more powerful is that it offers hundreds of connectors to other Microsoft 365 and third-party services like SAP, ServiceNow, Salesforce, Google, etc.

    However, what if you want to use your internal APIs, or APIs that don't yet offer official connectors? Here's an example: suppose your company has an inventory management system and you want to use it within Power Apps or Power Automate. That is exactly where Power Platform custom connectors become necessary.

    Inventory Management System for Power Apps

    Therefore, Power Platform custom connectors enrich citizen developers' capabilities, because they can connect any API application for the citizen developers to use.

    In this post, let's build a custom connector that provides a static map image generated by Google Maps API and Naver Map API using your GPS location.

    Proxy app to Google Maps and Naver Map API

    First, let's build an Azure Functions app that connects to Google Maps and Naver Map. Suppose that you've already got the API keys for both services. If you haven't yet, get the keys first by visiting here for Google and here for Naver. Then, store them in local.settings.json within your Azure Functions app.

    {
        "Values": {
            ...
            "Maps__Google__ApiKey": "<GOOGLE_MAPS_API_KEY>",
            "Maps__Naver__ClientId": "<NAVER_MAP_API_CLIENT_ID>",
            "Maps__Naver__ClientSecret": "<NAVER_MAP_API_CLIENT_SECRET>"
        }
    }
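    The double-underscore key names follow the standard .NET configuration convention for nested sections. In the service classes below they surface through a _settings object; a minimal sketch of what those settings types might look like (an assumption based on the _settings.Google.ApiKey usage, not code taken from the post) is:

    public class MapsSettings
    {
        public GoogleSettings Google { get; set; }   // bound from "Maps__Google__*"
        public NaverSettings Naver { get; set; }     // bound from "Maps__Naver__*"
    }

    public class GoogleSettings
    {
        public string ApiKey { get; set; }
    }

    public class NaverSettings
    {
        public string ClientId { get; set; }
        public string ClientSecret { get; set; }
    }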

    Here's the sample logic to get the static image from the Google Maps API. It takes the latitude and longitude of your current location and an image zoom level, then returns the static map image. There are a few hard-coded assumptions, though:

    • The image size should be 400x400.
    • The image should be in .png format.
    • The marker should be red and show my location.

    public class GoogleMapService : IMapService
    {
        public async Task<byte[]> GetMapAsync(HttpRequest req)
        {
            var latitude = req.Query["lat"];
            var longitude = req.Query["long"];
            var zoom = (string)req.Query["zoom"] ?? "14";

            var sb = new StringBuilder();
            sb.Append("https://maps.googleapis.com/maps/api/staticmap")
              .Append($"?center={latitude},{longitude}")
              .Append("&size=400x400")
              .Append($"&zoom={zoom}")
              .Append($"&markers=color:red|{latitude},{longitude}")
              .Append("&format=png32")
              .Append($"&key={this._settings.Google.ApiKey}");
            var requestUri = new Uri(sb.ToString());

            var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

            return bytes;
        }
    }

    The NaverMapService class has a similar logic with the same input and assumptions. Here's the code:

    public class NaverMapService : IMapService
    {
        public async Task<byte[]> GetMapAsync(HttpRequest req)
        {
            var latitude = req.Query["lat"];
            var longitude = req.Query["long"];
            var zoom = (string)req.Query["zoom"] ?? "13";

            var sb = new StringBuilder();
            sb.Append("https://naveropenapi.apigw.ntruss.com/map-static/v2/raster")
              .Append($"?center={longitude},{latitude}")
              .Append("&w=400")
              .Append("&h=400")
              .Append($"&level={zoom}")
              .Append($"&markers=color:blue|pos:{longitude}%20{latitude}")
              .Append("&format=png")
              .Append("&lang=en");
            var requestUri = new Uri(sb.ToString());

            this._http.DefaultRequestHeaders.Clear();
            this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY-ID", this._settings.Naver.ClientId);
            this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY", this._settings.Naver.ClientSecret);

            var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

            return bytes;
        }
    }

    Let's take a look at the function endpoints for both Google Maps and Naver Map. As the GetMapAsync(req) method returns a byte array, you need to transform it into a FileContentResult with the content type image/png.

    // Google Maps
    public class GoogleMapsTrigger
    {
        [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
        public async Task<IActionResult> GetGoogleMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
        {
            this._logger.LogInformation("C# HTTP trigger function processed a request.");

            var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

            return new FileContentResult(bytes, "image/png");
        }
    }

    // Naver Map
    public class NaverMapsTrigger
    {
        [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
        public async Task<IActionResult> GetNaverMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
        {
            this._logger.LogInformation("C# HTTP trigger function processed a request.");

            var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

            return new FileContentResult(bytes, "image/png");
        }
    }

    Then, add the OpenAPI capability to each function endpoint. Here's the example:

    // Google Maps
    public class GoogleMapsTrigger
    {
        [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
        // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
        [OpenApiOperation(operationId: nameof(GoogleMapsTrigger.GetGoogleMapImage), tags: new[] { "google" })]
        [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
        [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
        [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `14`")]
        [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
        // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
        public async Task<IActionResult> GetGoogleMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
        {
            ...
        }
    }

    // Naver Map
    public class NaverMapsTrigger
    {
        [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
        // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
        [OpenApiOperation(operationId: nameof(NaverMapsTrigger.GetNaverMapImage), tags: new[] { "naver" })]
        [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
        [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
        [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `13`")]
        [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
        // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
        public async Task<IActionResult> GetNaverMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
        {
            ...
        }
    }

    Run the function app locally. Here are the latitude and longitude values for Seoul, Korea:

    • latitude: 37.574703
    • longitude: 126.978519
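    Assuming the default local Functions runtime settings (port 7071 and the api route prefix), the Google Maps endpoint can then be called with a URL along these lines:

    http://localhost:7071/api/google/image?lat=37.574703&long=126.978519&zoom=14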

    Google Map for Seoul

    It seems to be working! Let's deploy it to Azure.

    API Management integration

    Visual Studio 2022 provides a built-in deployment tool for publishing Azure Functions apps to Azure. In addition, the deployment tool supports seamless integration with Azure API Management, as long as your Azure Functions app enables the OpenAPI capability. In this post, I'm going to use this feature. Right-click on the Azure Functions project and select the "Publish" menu.

    Visual Studio context menu for publish

    Then, you will see the publish screen. Click the "➕ New" button to create a new publish profile.

    Create a new publish profile

    Choose "Azure" and click the "Next" button.

    Choose the target platform for publish

    Select the app instance. This time simply pick up the "Azure Function App (Windows)" option, then click "Next".

    Choose the target OS for publish

    If you've already provisioned an Azure Functions app instance, you will see it on the screen. Otherwise, create a new one. Then, click "Next".

    Choose the target instance for publish

    In the next step, you are asked to choose the Azure API Management instance for integration. Choose one, or create a new one. Then, click "Next".

    Choose the APIM instance for integration

    Finally, select the publish method either local publish or GitHub Actions workflow. Let's pick up the local publish method for now. Then, click "Finish".

    Choose the deployment type

    The publish profile has been created. Click "Close" to move on.

    Publish profile created

    Now the function app is ready for deployment. Click the "Publish" button and see how it goes.

    Publish function app

    The Azure function app has been deployed and integrated with the Azure API Management instance.

    Function app published

    Go to the published function app site, and everything looks OK.

    Function app on Azure

    And API Management shows the function app integrated perfectly.

    Function app integrated with APIM

    Now, you are ready to create a custom connector. Let's move on.

    Two ways of building custom connector

    There are two ways to create a custom connector.

    Export custom connector from API Management

    First, you can directly use the built-in API Management feature. Then, click the ellipsis icon and select the "Create Power Connector" menu.

    Create Power Connector menu

    Then, you are redirected to this screen. While the "API" and "API display name" fields are pre-populated, you need to choose the Power Platform environment tied to your tenant. Choose an environment, click "Authenticate", and click "Create".

    Create custom connector screen

    Check your custom connector on Power Apps or Power Automate side.

    Custom connector created on Power Apps

    However, there's a caveat to this approach. Because it's tied to your tenant, you should use the second approach if you want to use this custom connector in another tenant.

    Import custom connector from OpenAPI document or URL

    Click the ellipsis icon again and select the "Export" menu.

    Export menu

    On the Export API screen, choose the "OpenAPI v2 (JSON)" panel because Power Platform custom connector currently accepts version 2 of the OpenAPI document.

    Select OpenAPI v2

    Download the OpenAPI document to your local computer and move to your Power Apps or Power Automate page under your desired environment. I'm going to use the Power Automate page. First, go to the "Data" ➡️ "Custom connectors" page. Then, click the "➕ New custom connector" ➡️ "Import an OpenAPI file" at the top right corner.

    New custom connector

    When a modal pops up, give the custom connector name and import the OpenAPI document exported above. Then, click "Continue".

    Import custom connector

    Actually, that's it! Next, click the "✔️ Create connector" button to create the connector.

    Create custom connector

    Go back to the custom connector page, and you will see the "Maps API" custom connector you just created.

    Custom connector imported

    So, you are ready to create a Power Apps app to display your location on Google Maps or Naver Map! Let's move on.

    Where am I? Power Apps app

    Open the Power Apps Studio, and create an empty canvas app named Who am I, with a phone layout.

    Custom connector integration

    To use the custom connector created above, you need to add it to the Power App. Click the cylinder icon on the left and click the "Add data" button.

    Add custom connector to data pane

    Search the custom connector name, "Maps API", and click the custom connector to add.

    Search custom connector

    To use the custom connector, you also need to create a connection to it. Click the "Connect" button and move on.

    Create connection to custom connector

    Now, you've got the connection to the custom connector.

    Connection to custom connector ready

    Controls

Let's build the Power Apps app. First of all, add three controls to the canvas: an Image, a Slider, and a Button.

    Power Apps control added

    Click the "Screen1" control and change the value on the property "OnVisible" to the formula below. The formula stores the current slider value in the zoomlevel collection.

ClearCollect(
    zoomlevel,
    Slider1.Value
)

    Click the "Botton1" control and change the value on the property "OnSelected" to the formula below. It passes the current latitude, longitude and zoom level to the custom connector and receives the image data. The received image data is stored in the result collection.

ClearCollect(
    result,
    MAPS.GetGoogleMapImage(
        Location.Latitude,
        Location.Longitude,
        { zoom: First(zoomlevel).Value }
    )
)

    Click the "Image1" control and change the value on the property "Image" to the formula below. It gets the image data from the result collection.

    First(result).Url

    Click the "Slider1" control and change the value on the property "OnChange" to the formula below. It stores the current slider value to the zoomlevel collection, followed by calling the custom connector to get the image data against the current location.

ClearCollect(
    zoomlevel,
    Slider1.Value
);
ClearCollect(
    result,
    MAPS.GetGoogleMapImage(
        Location.Latitude,
        Location.Longitude,
        { zoom: First(zoomlevel).Value }
    )
)

    That seems to be OK. Let's click the "Where am I?" button. But it doesn't show the image. The First(result).Url value is actually similar to this:

    appres://blobmanager/1090a86393a843adbfcf428f0b90e91b/1

It's an internal image reference that the app can't resolve into an actual image.

    Workaround Power Automate workflow

Therefore, you need a workaround using a Power Automate workflow to sort out this issue. Open the Power Automate Studio, create an instant cloud flow with the Power Apps trigger, and name it "Where am I". Then add the input parameters lat, long, and zoom.

    Power Apps trigger on Power Automate workflow

    Add custom connector action to get the map image.

    Select action to get the Google Maps image

Then pass the appropriate parameters to the action.

    Pass parameters to the custom connector action

    Add a "Response" action and put the following values into each field.

    • "Body" field:

  {
    "base64Image": <power_automate_expression>
  }

      The <power_automate_expression> should be concat('data:', body('GetGoogleMapImage')?['$content-type'], ';base64,', body('GetGoogleMapImage')?['$content']).

    • "Response Body JSON Schema" field:

  {
    "type": "object",
    "properties": {
      "base64Image": {
        "type": "string"
      }
    }
  }

    Format the Response action

    Let's return to the Power Apps Studio and add the Power Automate workflow you created.

    Add Power Automate workflow

    Select "Button1" and change the value on the property "OnSelect" below. It replaces the direct call to the custom connector with the Power Automate workflow.

ClearCollect(
    result,
    WhereamI.Run(
        Location.Latitude,
        Location.Longitude,
        First(zoomlevel).Value
    )
)

Also, change the "OnChange" property of the "Slider1" control to the formula below, replacing the custom connector call with the Power Automate workflow call.

ClearCollect(
    zoomlevel,
    Slider1.Value
);
ClearCollect(
    result,
    WhereamI.Run(
        Location.Latitude,
        Location.Longitude,
        First(zoomlevel).Value
    )
)

And finally, change the "Image1" control's "Image" property to the formula below.

    First(result).base64Image

    The workaround has been applied. Click the "Where am I?" button to see your current location from Google Maps.

    Run Power Apps app #1

    If you change the slider left or right, you will see either the zoomed-in image or the zoomed-out image.

    Run Power Apps app #2

    Now, you've created a Power Apps app to show your current location using:

    • Google Maps API through the custom connector, and
    • Custom connector written in Azure Functions with OpenAPI extension!

    Exercise: Try this yourself!

You can fork this GitHub repository to your account and play around with it to see how the custom connector works. After forking the repository, make sure you create all the necessary repository secrets documented in the README file.

    Then, click the "Deploy to Azure" button, and it will provision all necessary Azure resources and deploy an Azure Functions app for a custom connector.

    Deploy To Azure
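If you prefer the command line over the button, a rough equivalent sketch would look like the following. The template file name, resource group, and location here are illustrative placeholders; check the repository's README for the actual deployment instructions.

az group create --name my-resource-group --location westus2
az deployment group create \
  --resource-group my-resource-group \
  --template-file azuredeploy.json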

    Once everything is deployed successfully, try to create a Power Apps app and Power Automate workflow to see your current location in real-time!

    Resources: For self-study!

Want to know more about Power Platform custom connectors and the Azure Functions OpenAPI extension? Here are several resources you can take a look at:


    · 7 min read
    Devanshi Joshi

    It's Serverless September in a Nutshell! Join us as we unpack our month-long learning journey exploring the core technology pillars for Serverless architectures on Azure. Then end with a look at next steps to build your Cloud-native applications on Azure.


    What We'll Cover

    • Functions-as-a-Service (FaaS)
    • Microservices and Containers
    • Serverless Integrations
    • End-to-End Solutions
    • Developer Tools & #Hacktoberfest

    Banner for Serverless September


    Building Cloud-native Apps

    By definition, cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. You can learn more about cloud-native in Kendall Roden's #ServerlessSeptember post on Going Cloud-native with Azure Container Apps.

Serverless technologies accelerate productivity and minimize costs for deploying applications at cloud scale. So, what can we build with serverless technologies in cloud-native on Azure? Anything that is event-driven - examples include (with a small CLI sketch after the list):

    • Microservices - scaled by KEDA-compliant triggers
    • Public API Endpoints - scaled by #concurrent HTTP requests
    • Event-Driven Applications - scaled by length of message queue
    • Web Applications - scaled by #concurrent HTTP requests
    • Background Process - scaled by CPU and Memory usage
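As a minimal sketch of the first two patterns (resource names and image are illustrative placeholders), an HTTP-scaled container app can be created with external ingress and a replica range:

az containerapp create \
  --name my-api \
  --resource-group my-resource-group \
  --environment my-environment \
  --image myregistry.azurecr.io/my-api:v1 \
  --ingress external \
  --target-port 8080 \
  --min-replicas 0 \
  --max-replicas 10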

    Great - but as developers, we really want to know how we can get started building and deploying serverless solutions on Azure. That was the focus of our #ServerlessSeptember journey. Let's take a quick look at the four key themes.

    Functions-as-a-Service (FaaS)

    Functions-as-a-Service (FaaS) is the epitome of developer productivity for full-stack modern apps. As developers, you don't manage infrastructure and focus only on business logic and application code. And, with Serverless Compute you only pay for when your code runs - making this the simplest first step to begin migrating your application to cloud-native.

In Azure, FaaS is provided by Azure Functions. Check out Functions + Serverless on Azure to go from learning core concepts to building your first Functions app in your programming language of choice. Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell.
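For a feel of the developer loop, here's a minimal sketch using Azure Functions Core Tools; the project and function names are illustrative.

# scaffold a JavaScript Functions project and an HTTP-triggered function
func init my-functions-app --worker-runtime node
cd my-functions-app
func new --name HttpExample --template "HTTP trigger"

# run it locally before deploying
func start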

Want to get extended language support for languages like Go and Rust? You can Use Custom Handlers to make this happen! But what if you want to have long-running functions, or create complex workflows involving more than one function? Read our post on Durable Entities to learn how you can orchestrate this with Azure Functions.

    Check out this recent AskTheExpert Q&A session with the Azure Functions team to get answers to popular community questions on Azure Functions features and usage.

    Microservices and Containers

    Functions-as-a-Service is an ideal first step towards serverless development. But Functions are just one of the 5 pillars of cloud-native. This week we'll look at two of the other pillars: microservices and containers - with specific focus on two core technologies: Azure Container Apps and Dapr (Distributed Application Runtime).

    In this 6-part series of posts, we walk through each technology independently, before looking at the value of building Azure Container Apps with Dapr.

    • In Hello Container Apps we learned core concepts & deployed our first ACA.
    • In Microservices Communication we learned about ACA environments and virtual networks, and how microservices communicate in ACA with a hands-on tutorial.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and configuring ACA for autoscaling with KEDA-compliant triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr), exploring its Building Block APIs and sidecar architecture for working with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
• Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (Pub/Sub, State Management) to receive and store order messages from Azure Service Bus.

    Build ACA with Dapr

    Check out this recent AskTheExpert Q&A session with the Azure Container Apps team for answers to popular community questions on core features and usage.

    Serverless Integrations

    In the first half of the month we looked at compute resources for building and deploying serverless applications. In the second half, we look at integration tools and resources that automate developer workflows to streamline the end-to-end developer experience.

In Azure, this is enabled by services like Azure Logic Apps and Azure Event Grid. Azure Logic Apps provides a visual designer to create and automate workflows with little or no code involved. Azure Event Grid provides a highly scalable event broker with support for pub/sub communications to drive async event-driven architectures.

    • In Tracking Weather Data Changes With Logic Apps we look at how you can use Logic Apps to integrate the MSN weather service with Azure CosmosDB, allowing automated collection of weather data on changes.

    • In Teach the Cloud to Read & Categorize Mail we take it a step further, using Logic Apps to automate a workflow that includes a Computer Vision service to "read" images and store the results to CosmosDB.

    • In Integrate with Microsoft Graph we explore a multi-cloud scenario (Azure + M365) where change notifications from Microsoft Graph can be integrated using Logic Apps and Event Hubs to power an onboarding workflow.

    • In Cloud Events with Event Grid we learn about the CloudEvents specification (for consistently describing event data) - and learn how Event Grid brokers events in this format. Azure Logic Apps can be an Event handler (subscriber) that uses the event to trigger an automated workflow on receipt.

      Azure Event Grid And Logic Apps

    Want to explore other such integrations? Browse Azure Architectures and filter by selected Azure services for more real-world scenarios.


    End-to-End Solutions

    We've covered serverless compute solutions (for building your serverless applications) and serverless integration services to automate end-to-end workflows in synchronous or asynchronous event-driven architectures. In this final week, we want to leave you with a sense of end-to-end development tools and use cases that can be enabled by Serverless on Azure. Here are some key examples:

• In this tutorial, you'll learn to deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps - with a Blazor front-end and two Web API projects.
• **Deploy Java containers to cloud**: In this tutorial, you learn to build and deploy a Java application running on Spring Boot, publishing it in a container to Azure Container Registry and then deploying it to Azure Container Apps from ACR via the Azure Portal.
• **Where am I? My GPS Location with Serverless Power Platform Custom Connector**: In this step-by-step tutorial, you learn to integrate a serverless application (built on Azure Functions and OpenAPI) with Power Platform custom connectors via Azure API Management (API-M). This pattern can empower a new ecosystem of fusion apps for cases like inventory management.

And in our Serverless Hacks initiative, we walked through an 8-step hack to build a serverless tollbooth. Check out this 12-part video walkthrough of a reference solution using .NET.

    Developer Tools

But wait - there's more. Those are a sample of the end-to-end application scenarios built on serverless on Azure. But what about the developer experience? In this article, we say hello to the Azure Developer CLI - an open-source tool that streamlines your develop-deploy workflow, with simple commands that map to core stages of your development journey. Go from code to cloud with one CLI!
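As a quick sketch of that workflow (the template name is illustrative; see the azd documentation for the available templates):

# scaffold a project from a starter template
azd init --template todo-nodejs-mongo

# provision Azure resources and deploy the app in one step
azd up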

    And watch this space for more such tutorials and content through October, including a special #Hacktoberfest focused initiative to encourage and support first-time contributors to open-source. Here's a sneak peek at the project we plan to share - the new awesome-azd templates gallery.


    Join us at Microsoft Ignite!

    Want to continue your learning journey, and learn about what's next for Serverless on Azure? Microsoft Ignite happens Oct 12-14 this year and has multiple sessions on relevant technologies and tools. Check out the Session Catalog and register here to attend online.


    · 11 min read
    Kendall Roden

    Welcome to Day 13 of #30DaysOfServerless!

    In the previous post, we learned about all things Distributed Application Runtime (Dapr) and highlighted the capabilities you can unlock through managed Dapr in Azure Container Apps! Today, we'll dive into how we can make use of Container Apps secrets and managed identities to securely access cloud-hosted resources that your Container Apps depend on!

    Ready? Let's go.


    What We'll Cover

    • Secure access to external services overview
    • Using Container Apps Secrets
    • Using Managed Identity for connecting to Azure resources
    • Using Dapr secret store component references (Dapr-only)
    • Conclusion
    • Resources: For self-study!


    Securing access to external services

In most, if not all, microservice-based applications, one or more services in the system will rely on other cloud-hosted resources; think external services like databases, secret stores, message brokers, event sources, etc. To interact with these services, an application must be able to establish a secure connection. Traditionally, an application authenticates to these backing resources using some type of connection string or password.

    I'm not sure if it was just me, but one of the first things I learned as a developer was to ensure credentials and other sensitive information were never checked into the codebase. The ability to inject these values at runtime is a non-negotiable.

    In Azure Container Apps, applications can securely leverage connection information via Container Apps Secrets. If the resource is Azure-based, a more ideal solution that removes the dependence on secrets altogether is using Managed Identity.

    Specifically for Dapr-enabled container apps, users can now tap into the power of the Dapr secrets API! With this new capability unlocked in Container Apps, users can call the Dapr secrets API from application code to securely access secrets from Key Vault or other backing secret stores. In addition, customers can also make use of a secret store component reference when wiring up Dapr state store components and more!

ALSO, I'm excited to share that support for Dapr + Managed Identity is now available! What does this mean? It means that you can enable Managed Identity for your container app - and when establishing connections via Dapr, the Dapr sidecar can use this identity! This means simplified components without the need for secrets when connecting to Azure services!

    Let's dive a bit deeper into the following three topics:

    1. Using Container Apps secrets in your container apps
    2. Using Managed Identity to connect to Azure services
    3. Connecting to services securely for Dapr-enabled apps

    Secure access to external services without Dapr

    Leveraging Container Apps secrets at runtime

Users can leverage this approach for any values which need to be securely stored; however, it is recommended to use Managed Identity where possible when connecting to Azure-specific resources.

    First, let's establish a few important points regarding secrets in container apps:

    • Secrets are scoped at the container app level, meaning secrets cannot be shared across container apps today
    • When running in multiple-revision mode,
      • changes to secrets do not generate a new revision
  • running revisions will not be automatically restarted to reflect changes. If you want to force existing container app revisions to pick up the changed secret values, you will need to perform revision restarts (a sketch follows this list).
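As a rough sketch of such a restart (app and revision names are placeholders; check the CLI reference for the exact parameters):

az containerapp revision restart \
  --name myQueueApp \
  --resource-group "my-resource-group" \
  --revision <revision-name>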
    STEP 1

    Provide the secure value as a secret parameter when creating your container app using the syntax "SECRET_NAME=SECRET_VALUE"

az containerapp create \
  --resource-group "my-resource-group" \
  --name queuereader \
  --environment "my-environment-name" \
  --image demos/queuereader:v1 \
  --secrets "queue-connection-string=$CONNECTION_STRING"
    STEP 2

    Create an environment variable which references the value of the secret created in step 1 using the syntax "ENV_VARIABLE_NAME=secretref:SECRET_NAME"

az containerapp create \
  --resource-group "my-resource-group" \
  --name myQueueApp \
  --environment "my-environment-name" \
  --image demos/myQueueApp:v1 \
  --secrets "queue-connection-string=$CONNECTION_STRING" \
  --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"

    This ConnectionString environment variable can be used within your application code to securely access the connection string value at runtime.
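If you want to confirm how the variable is wired up, one option (a sketch using the same illustrative names as above; the query path is worth verifying against the current API) is to inspect the app's template - secret-backed variables show the secret reference, not the secret value:

az containerapp show \
  --name myQueueApp \
  --resource-group "my-resource-group" \
  --query "properties.template.containers[0].env"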

    Using Managed Identity to connect to Azure services

    A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. This approach is recommended where possible as it eliminates the need for managing secret credentials in your container apps and allows you to properly scope the permissions needed for a given container app using role-based access control. Both system-assigned and user-assigned identities are available in container apps. For more background on managed identities in Azure AD, see Managed identities for Azure resources.

To configure your app with a system-assigned managed identity, follow steps similar to the following:

    STEP 1

    Run the following command to create a system-assigned identity for your container app

az containerapp identity assign \
  --name "myQueueApp" \
  --resource-group "my-resource-group" \
  --system-assigned
    STEP 2

    Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

PRINCIPAL_ID=$(az containerapp identity show \
  --name "myQueueApp" \
  --resource-group "my-resource-group" \
  --query principalId \
  --output tsv)
    STEP 3

    Assign the appropriate roles and permissions to your container app's managed identity using the Principal ID in step 2 based on the resources you need to access (example below)

az role assignment create \
  --role "Storage Queue Data Contributor" \
  --assignee $PRINCIPAL_ID \
  --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/queueServices/default/queues/<queue>"

After running the above commands, your container app will be able to access your Azure Storage queue because its managed identity has been assigned the "Storage Queue Data Contributor" role. The role assignments you create will depend solely on the resources your container app needs to access. To instrument your code to use this managed identity, see more details here.

    In addition to using managed identity to access services from your container app, you can also use managed identity to pull your container images from Azure Container Registry.
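For example, here's a rough sketch of wiring that up with the system-assigned identity (the registry name is illustrative):

az containerapp registry set \
  --name myQueueApp \
  --resource-group "my-resource-group" \
  --server myregistry.azurecr.io \
  --identity system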

    Secure access to external services with Dapr

    For Dapr-enabled apps, there are a few ways to connect to the resources your solutions depend on. In this section, we will discuss when to use each approach.

    1. Using Container Apps secrets in your Dapr components
    2. Using Managed Identity with Dapr Components
    3. Using Dapr Secret Stores for runtime secrets and component references

    Using Container Apps secrets in Dapr components

Prior to support for the Dapr Secrets Management building block, this was the only approach available for securely storing sensitive values for use in Dapr components.

In Dapr OSS, when no secret store reference is provided in a Dapr component file, the default secret store is set to "Kubernetes secrets". In Container Apps, we do not expose the ability to use this default store. Rather, Container Apps secrets can be used in its place.

    With the introduction of the Secrets API and the ability to use Dapr + Managed Identity, this approach is useful for a limited number of scenarios:

    • Quick demos and dev/test scenarios using the Container Apps CLI
    • Securing values when a secret store is not configured or available for use
• Using service principal credentials to configure an Azure Key Vault secret store component (using Managed Identity is recommended)
    • Securing access credentials which may be required when creating a non-Azure secret store component
    STEP 1

    Create a Dapr component which can be used by one or more services in the container apps environment. In the below example, you will create a secret to store the storage account key and reference this secret from the appropriate Dapr metadata property.

componentType: state.azure.blobstorage
version: v1
metadata:
  - name: accountName
    value: testStorage
  - name: accountKey
    secretRef: account-key
  - name: containerName
    value: myContainer
secrets:
  - name: account-key
    value: "<STORAGE_ACCOUNT_KEY>"
scopes:
  - myApp
    STEP 2

    Deploy the Dapr component using the below command with the appropriate arguments.

az containerapp env dapr-component set \
  --name "my-environment" \
  --resource-group "my-resource-group" \
  --dapr-component-name statestore \
  --yaml "./statestore.yaml"

    Using Managed Identity with Dapr Components

    Dapr-enabled container apps can now make use of managed identities within Dapr components. This is the most ideal path for connecting to Azure services securely, and allows for the removal of sensitive values in the component itself.

The Dapr sidecar makes use of the existing identities available within a given container app; Dapr itself does not have its own identity. Therefore, the steps to enable Dapr + MI are similar to those in the section regarding managed identity for non-Dapr apps. See the example steps below, specifically for using a system-assigned identity:

    1. Create a system-assigned identity for your container app

    2. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    3. Assign the appropriate roles and permissions (for accessing resources backing your Dapr components) to your ACA's managed identity using the Principal ID

    4. Create a simplified Dapr component without any secrets required

  componentType: state.azure.blobstorage
  version: v1
  metadata:
    - name: accountName
      value: testStorage
    - name: containerName
      value: myContainer
  scopes:
    - myApp
    5. Deploy the component to test the connection from your container app via Dapr!

Keep in mind, all Dapr components will be loaded by each Dapr-enabled container app in an environment by default. To prevent apps that lack the appropriate permissions from trying, and failing, to load a component, use scopes. This ensures that only applications with the appropriate identities to access the backing resource load the component.

    Using Dapr Secret Stores for runtime secrets and component references

    Dapr integrates with secret stores to provide apps and other components with secure storage and access to secrets such as access keys and passwords. The Dapr Secrets API is now available for use in Container Apps.

    Using Dapr’s secret store building block typically involves the following:

    • Setting up a component for a specific secret store solution.
    • Retrieving secrets using the Dapr secrets API in the application code.
    • Optionally, referencing secrets in Dapr component files.

Let's walk through a couple of sample workflows involving the use of Dapr's Secrets Management capabilities!

    Setting up a component for a specific secret store solution

    1. Create an Azure Key Vault instance for hosting the secrets required by your application.

      az keyvault create --name "<your-unique-keyvault-name>" --resource-group "my-resource-group" --location "<your-location>"
    2. Create an Azure Key Vault component in your environment without the secrets values, as the connection will be established to Azure Key Vault via Managed Identity.

  componentType: secretstores.azure.keyvault
  version: v1
  metadata:
    - name: vaultName
      value: "[your_keyvault_name]"
  scopes:
    - myApp

  az containerapp env dapr-component set \
    --name "my-environment" \
    --resource-group "my-resource-group" \
    --dapr-component-name secretstore \
    --yaml "./secretstore.yaml"
    3. Run the following command to create a system-assigned identity for your container app

  az containerapp identity assign \
    --name "myApp" \
    --resource-group "my-resource-group" \
    --system-assigned
    4. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

  PRINCIPAL_ID=$(az containerapp identity show \
    --name "myApp" \
    --resource-group "my-resource-group" \
    --query principalId \
    --output tsv)
    5. Assign the appropriate roles and permissions to your container app's managed identity to access Azure Key Vault

  az role assignment create \
    --role "Key Vault Secrets Officer" \
    --assignee $PRINCIPAL_ID \
    --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
    6. Begin using the Dapr Secrets API in your application code to retrieve secrets! See additional details here.
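As a minimal sketch of what that call looks like from inside the container, assuming the default Dapr HTTP port 3500, the secretstore component defined above, and a hypothetical secret named my-api-key:

curl http://localhost:3500/v1.0/secrets/secretstore/my-api-key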

    Referencing secrets in Dapr component files

    Once a Dapr secret store component is available in the environment, it can be used to retrieve secrets for use in other components. For example, when creating a state store component, you can add a reference to the Dapr secret store from which you would like to source connection information. You will no longer use secrets directly in the component spec, but rather will instruct the Dapr sidecar to retrieve the secrets from the specified store.

componentType: state.azure.blobstorage
version: v1
metadata:
  - name: accountName
    value: testStorage
  - name: accountKey
    secretRef: account-key
  - name: containerName
    value: myContainer
secretStoreComponent: "<SECRET_STORE_COMPONENT_NAME>"
scopes:
  - myApp

    Summary

In this post, we covered the high-level details of how to work with secret values in Azure Container Apps for both Dapr and non-Dapr apps. In the next article, we will walk through a complex end-to-end Dapr example that makes use of the new support for Dapr + Managed Identity. Stay tuned for additional documentation on Dapr secrets, as it will be released in the coming weeks!

    Resources

    Here are the main resources to explore for self-study:


    · 8 min read
    Nitya Narasimhan

    Welcome to Day 12 of #30DaysOfServerless!

    So far we've looked at Azure Container Apps - what it is, how it enables microservices communication, and how it enables auto-scaling with KEDA compliant scalers. Today we'll shift gears and talk about Dapr - the Distributed Application Runtime - and how it makes microservices development with ACA easier with core building blocks and a sidecar architecture!

    Ready? Let's go!


    What We'll Cover

    • What is Dapr and why use it?
    • Building Block APIs
    • Dapr Quickstart and Tutorials
    • Dapr-enabled ACA: A Sidecar Approach
    • Exercise: Build & Deploy a Dapr-enabled ACA.
    • Resources: For self-study!


    Hello, Dapr!

Building distributed applications is hard. Building reliable and portable microservices means having middleware that deals with challenges like service discovery, sync and async communications, state management, secure information sharing, and more. Integrating these support services into your application can be challenging from both development and maintenance perspectives, adding complexity that is independent of the core application logic you want to focus on.

This is where Dapr (Distributed Application Runtime) shines - it's defined as:

    a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.

    But what does this actually mean to me as an app developer?


    Dapr + Apps: A Sidecar Approach

    The strength of Dapr lies in its ability to:

    • abstract complexities of distributed systems middleware - with Building Block APIs that implement components using best practices to tackle key challenges.
    • implement a Sidecar Pattern with interactions via APIs - allowing applications to keep their codebase clean and focus on app logic.
    • be Incrementally Adoptable - allowing developers to start by integrating one API, then evolving to use more as and when needed.
    • be Platform Agnostic - allowing applications to be developed in a preferred language or framework without impacting integration capabilities.

The application-Dapr sidecar interaction is illustrated below. The API abstraction allows applications to get the desired functionality without having to know how it was implemented, and without having to integrate Dapr-specific code into their codebase. Note how the sidecar process listens on port 3500 and the API provides clear routes for the specific building blocks supported by Dapr (e.g., /secrets, /state, etc.).


    Dapr Building Blocks: API Interactions

Dapr Building Blocks refers to the HTTP and gRPC API endpoints exposed by the Dapr sidecar, providing key capabilities like state management, observability, service-to-service invocation, pub/sub messaging, and more to the associated application.

    Building Blocks: Under the Hood
The Dapr API is implemented by modular components that codify best practices for tackling the specific challenge they represent. The API abstraction allows component implementations to evolve, or alternatives to be used, without requiring changes to the application codebase.

    The latest Dapr release has the building blocks shown in the above figure. Not all capabilities are available to Azure Container Apps by default - check the documentation for the latest updates on this. For now, Azure Container Apps + Dapr integration provides the following capabilities to the application:

    In the next section, we'll dive into Dapr-enabled Azure Container Apps. Before we do that, here are a couple of resources to help you explore the Dapr platform by itself, and get more hands-on experience with the concepts and capabilities:

• Dapr Quickstarts - build your first Dapr app, then explore quickstarts for the core APIs, including service-to-service invocation, pub/sub, state management, bindings, and secrets management.
    • Dapr Tutorials - go beyond the basic quickstart and explore more realistic service integrations and usage scenarios. Try the distributed calculator example!

    Integrate Dapr & Azure Container Apps

    Dapr currently has a v1.9 (preview) version, but Azure Container Apps supports Dapr v1.8. In this section, we'll look at what it takes to enable, configure, and use, Dapr integration with Azure Container Apps. It involves 3 steps: enabling Dapr using settings, configuring Dapr components (API) for use, then invoking the APIs.

Here's a simple publisher-subscriber scenario from the documentation. We have two container apps identified as publisher-app and subscriber-app deployed in a single environment. Each ACA has an activated daprd sidecar, allowing them to use the Pub/Sub API to communicate asynchronously with each other - without having to write the underlying pub/sub implementation themselves. Rather, we can see that the Dapr API uses a pubsub.azure.servicebus component to implement that capability.

    Pub/sub example

    Let's look at how this is setup.

    1. Enable Dapr in ACA: Settings

    We can enable Dapr integration in the Azure Container App during creation by specifying settings in one of two ways, based on your development preference:

    • Using Azure CLI: use custom commandline options for each setting
    • Using Infrastructure-as-Code (IaC): using properties for Bicep, ARM templates

Once enabled, Dapr will run in the same environment as the Azure Container App and listen on port 3500 for API requests. The Dapr sidecar can be shared by multiple Container Apps deployed in the same environment.

    There are four main settings we will focus on for this demo - the example below shows the ARM template properties, but you can find the equivalent CLI parameters here for comparison.

    • dapr.enabled - enable Dapr for Azure Container App
    • dapr.appPort - specify port on which app is listening
    • dapr.appProtocol - specify if using http (default) or gRPC for API
    • dapr.appId - specify unique application ID for service discovery, usage

These are defined under the properties.configuration section for your resource. Changing Dapr settings does not create a new revision, but it will restart ACA revisions and replicas. Here is what the relevant section of the ARM template looks like for the publisher-app ACA in the scenario shown above.

    "dapr": {
    "enabled": true,
    "appId": "publisher-app",
    "appProcotol": "http",
    "appPort": 80
    }
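For comparison, a rough CLI sketch of the same settings follows; the image and resource names are illustrative, and the exact parameter names are worth checking against the CLI reference.

az containerapp create \
  --name publisher-app \
  --resource-group my-resource-group \
  --environment my-environment \
  --image myregistry.azurecr.io/publisher-app:v1 \
  --enable-dapr \
  --dapr-app-id publisher-app \
  --dapr-app-port 80 \
  --dapr-app-protocol http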

    2. Configure Dapr in ACA: Components

The next step after activating the Dapr sidecar is to define the APIs that you want to use and, potentially, specify the Dapr components (specific implementations of that API) that you prefer. These components are created at the environment level, and by default, Dapr-enabled container apps in an environment will load the complete set of deployed components - use the scopes property to ensure only components needed by a given app are loaded at runtime. Here's what the ARM template resources section looks like for the example above. This tells us that the environment has a dapr-pubsub component of type pubsub.azure.servicebus deployed - where that component is loaded only by container apps with the Dapr IDs publisher-app and subscriber-app.

    USING MANAGED IDENTITY + DAPR

The secrets approach used here is ideal for demo purposes. However, we recommend using Managed Identity with Dapr in production. For more details on secrets, check out tomorrow's post on Secrets and Managed Identity in Azure Container Apps.

{
  "resources": [
    {
      "type": "daprComponents",
      "name": "dapr-pubsub",
      "properties": {
        "componentType": "pubsub.azure.servicebus",
        "version": "v1",
        "secrets": [
          {
            "name": "sb-root-connectionstring",
            "value": "value"
          }
        ],
        "metadata": [
          {
            "name": "connectionString",
            "secretRef": "sb-root-connectionstring"
          }
        ],
        // Application scopes
        "scopes": ["publisher-app", "subscriber-app"]
      }
    }
  ]
}

    With this configuration, the ACA is now set to use pub/sub capabilities from the Dapr sidecar, using standard HTTP requests to the exposed API endpoint for this service.
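As a sketch of that interaction (the topic name and payload are illustrative), the publisher-app can publish a message by calling the sidecar's Pub/Sub endpoint, and the dapr-pubsub component delivers it via Azure Service Bus:

curl -X POST http://localhost:3500/v1.0/publish/dapr-pubsub/orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": "123"}'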

    Exercise: Deploy Dapr-enabled ACA

    In the next couple posts in this series, we'll be discussing how you can use the Dapr secrets API and doing a walkthrough of a more complex example, to show how Dapr-enabled Azure Container Apps are created and deployed.

However, you can get hands-on experience with these concepts by walking through one of these two tutorials, each providing an alternative approach to configure and set up the application described in the scenario below:

    Resources

    Here are the main resources to explore for self-study:


    · 6 min read
    Melony Qin

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Custom Handlers, and why use them?
    • How Custom Handler Works
    • Message Processing With Azure Custom Handler
    • Azure Portal Monitoring


If you have been working with Azure Functions for a while, you may know that Azure Functions is a serverless FaaS (Function as a Service) offering provided by Microsoft Azure. It is built for key scenarios including building web APIs, processing file uploads, responding to database changes, processing IoT data streams, managing message queues, and more.

    Custom Handlers: What and Why

Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell. If you want extended language support with Azure Functions for other languages such as Go and Rust, that's where custom handlers come in.

An Azure Functions custom handler allows you to use any language that supports HTTP primitives to author Azure Functions. With custom handlers, you can use triggers and input and output bindings via extension bundles, so it supports all the triggers and bindings you're used to with Azure Functions.

    How a Custom Handler Works

Let's take a look at custom handlers and how they work.

    • A request is sent to the function host when an event is triggered. It’s up to the function host to issue a request payload to the custom handler, which holds the trigger and inputs binding data as well as other metadata for the function.
    • The custom handler is a local HTTP web server. It executes the function code and returns a response payload to the Functions host.
• The Functions host passes data from the response to the function's output bindings, which pass it on to the downstream services for data processing.

Check out this article to learn more about Azure Functions custom handlers.


    Message processing with Custom Handlers

Message processing is one of the key scenarios that Azure Functions addresses. In the message-processing scenario, events are often collected in queues. These events can trigger Azure Functions to execute a piece of business logic.

You can use the Service Bus trigger to respond to messages from an Azure Service Bus queue - it's then up to the Azure Functions custom handler to take further actions to process the messages. The process is described in the following diagram:

    Building Serverless Go Applications with Azure functions custom handlers

In Azure Functions, the function.json file defines the function's trigger, input and output bindings, and other configuration settings. Note that every function can have multiple bindings, but only one trigger. The following is an example of setting up the Service Bus queue trigger in the function.json file:

{
  "bindings": [
    {
      "name": "queueItem",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "functionqueue",
      "connection": "ServiceBusConnection"
    }
  ]
}

You can add a binding definition in function.json to write the output to a database or another location of your choice. Supported bindings can be found here.

Since we're programming in Go, we need to set the value of defaultExecutablePath to the handler executable in the customHandler.description section of the host.json file.

Assume we're working on Windows and have named our Go application server.go. After we run the go build server.go command, it produces an executable called server.exe. So here we set server.exe in host.json, as in the following example:

      "customHandler": {
    "description": {
    "defaultExecutablePath": "./server.exe",
    "workingDirectory": "",
    "arguments": []
    }
    }
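Putting it together, here's a minimal local build-and-run sketch, assuming the file names above and Azure Functions Core Tools installed (on Linux/macOS, drop the .exe suffix and update host.json accordingly):

# build the custom handler binary referenced by host.json
go build -o server.exe server.go

# start the Functions host locally; it launches the custom handler process
func start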

We're showcasing a simple Go application with Azure Functions custom handlers, where we print out the messages received from the Functions host. The following is the full code of the server.go application:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
)

type InvokeRequest struct {
    Data     map[string]json.RawMessage
    Metadata map[string]interface{}
}

func queueHandler(w http.ResponseWriter, r *http.Request) {
    var invokeRequest InvokeRequest

    d := json.NewDecoder(r.Body)
    d.Decode(&invokeRequest)

    var parsedMessage string
    json.Unmarshal(invokeRequest.Data["queueItem"], &parsedMessage)

    fmt.Println(parsedMessage)
}

func main() {
    customHandlerPort, exists := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
    if !exists {
        customHandlerPort = "8080"
    }
    mux := http.NewServeMux()
    mux.HandleFunc("/MessageProcessorFunction", queueHandler)
    fmt.Println("Go server Listening on: ", customHandlerPort)
    log.Fatal(http.ListenAndServe(":"+customHandlerPort, mux))
}

Ensure you have Azure Functions Core Tools installed, then use the func start command to start the function. We then use a C#-based message sender application on GitHub to send out 3,000 messages to the Azure Service Bus queue. You'll see Azure Functions instantly start to process the messages and print them out, as follows:

    Monitoring Serverless Go Applications with Azure functions custom handlers


    Azure portal monitoring

Let's go back to the Azure portal to see how the messages in the Azure Service Bus queue were processed. There were 3,000 messages queued in the Service Bus queue (the blue line represents incoming messages). The outgoing messages (the red line with the smaller wave shape) show them progressively being read by Azure Functions, as follows:

    Monitoring Serverless Go Applications with Azure functions custom handlers

Check out this article about monitoring Azure Service Bus for further information.

    Next steps

Thanks for following along; we're looking forward to hearing your feedback. Also, if you discover potential issues, please record them in the Azure Functions host GitHub repository or tag us @AzureFunctions on Twitter.

    RESOURCES

To start building your serverless applications with custom handlers, check out the official documentation:

    Life is a journey of learning. Let’s stay tuned!


    · 5 min read
    Anthony Chu

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Using Visual Studio
    • Using Visual Studio Code: Docker, ACA extensions
    • Using Azure CLI
    • Using CI/CD Pipelines


    Last week, @kendallroden wrote about what it means to be Cloud-Native and how Azure Container Apps provides a serverless containers platform for hosting all of your Cloud-Native applications. Today, we’ll walk through a few ways to get your apps up and running on Azure Container Apps.

    Depending on where you are in your Cloud-Native app development journey, you might choose to use different tools to deploy your apps.

    • “Right-click, publish” – Deploying an app directly from an IDE or code editor is often seen as a bad practice, but it’s one of the quickest ways to test out an app in a cloud environment.
    • Command line interface – CLIs are useful for deploying apps from a terminal. Commands can be run manually or in a script.
    • Continuous integration/deployment – To deploy production apps, the recommended approach is to automate the process in a robust CI/CD pipeline.

    Let's explore some of these options in more depth.

    Visual Studio

    Visual Studio 2022 has built-in support for deploying .NET applications to Azure Container Apps. You can use the familiar publish dialog to provision Container Apps resources and deploy to them directly. This helps you prototype an app and see it run in Azure Container Apps with the least amount of effort.

    Journey to the cloud with Azure Container Apps

    Once you’re happy with the app and it’s ready for production, Visual Studio allows you to push your code to GitHub and set up a GitHub Actions workflow to build and deploy your app every time you push changes. You can do this by checking a box.

    Journey to the cloud with Azure Container Apps

    Visual Studio Code

    There are a couple of valuable extensions that you’ll want to install if you’re working in VS Code.

    Docker extension

    The Docker extension provides commands for building a container image for your app and pushing it to a container registry. It can even do this without requiring Docker Desktop on your local machine --- the “Build image in Azure” command remotely builds and pushes a container image to Azure Container Registry.

    Journey to the cloud with Azure Container Apps

And if your app doesn't have a Dockerfile, the extension will generate one for you.

    Journey to the cloud with Azure Container Apps

    Azure Container Apps extension

    Once you’ve built your container image and pushed it to a registry, the Azure Container Apps VS Code extension provides commands for creating a container app and deploying revisions using the image you’ve built.

    Journey to the cloud with Azure Container Apps

    Azure CLI

    The Azure CLI can be used to manage pretty much anything in Azure. For Azure Container Apps, you’ll find commands for creating, updating, and managing your Container Apps resources.

    Just like in VS Code, with a few commands in the Azure CLI, you can create your Azure resources, build and push your container image, and then deploy it to a container app.

    To make things as simple as possible, the Azure CLI also has an “az containerapp up” command. This single command takes care of everything that’s needed to turn your source code from your local machine to a cloud-hosted application in Azure Container Apps.

    az containerapp up --name myapp --source ./src

    We saw earlier that Visual Studio can generate a GitHub Actions workflow to automatically build and deploy your app on every commit. “az containerapp up” can do this too. The following adds a workflow to a repo.

    az containerapp up --name myapp --repo https://github.com/myorg/myproject

    CI/CD pipelines

    When it’s time to take your app to production, it’s strongly recommended to set up a CI/CD pipeline to automatically and repeatably build, test, and deploy it. We’ve already seen that tools such as Visual Studio and Azure CLI can automatically generate a workflow for GitHub Actions. You can set up a pipeline in Azure DevOps too. This is an example Azure DevOps pipeline.

trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: build
        displayName: Build app
        steps:
          - task: Docker@2
            inputs:
              containerRegistry: 'myregistry'
              repository: 'hello-aca'
              command: 'buildAndPush'
              Dockerfile: 'hello-container-apps/Dockerfile'
              tags: '$(Build.BuildId)'

  - stage: Deploy
    jobs:
      - job: deploy
        displayName: Deploy app
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: 'my-subscription(5361b9d6-46ea-43c3-a898-15f14afb0db6)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                # automatically install Container Apps CLI extension
                az config set extension.use_dynamic_install=yes_without_prompt

                # ensure registry is configured in container app
                az containerapp registry set \
                  --name hello-aca \
                  --resource-group mygroup \
                  --server myregistry.azurecr.io \
                  --identity system

                # update container app
                az containerapp update \
                  --name hello-aca \
                  --resource-group mygroup \
                  --image myregistry.azurecr.io/hello-aca:$(Build.BuildId)

    Conclusion

    In this article, we looked at a few ways to deploy your Cloud-Native applications to Azure Container Apps and how to decide which one to use based on where you are in your app’s journey to the cloud.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.


    · 12 min read
    Nitya Narasimhan

    Welcome to Day 9 of #30DaysOfServerless!


    What We'll Cover

    • The Week Ahead
    • Hello, Container Apps!
    • Quickstart: Build Your First ACA!
    • Under The Hood: Core ACA Concepts
    • Exercise: Try this yourself!
    • Resources: For self-study!


    The Week Ahead

Welcome to Week 2 of #ServerlessSeptember, where we put the focus on Microservices and building Cloud-Native applications that are optimized for serverless solutions on Azure. One week is not enough to do this complex topic justice, so consider this a 7-part jumpstart to the longer journey.

    1. Hello, Container Apps (ACA) - Learn about Azure Container Apps, a key service that helps you run microservices and containerized apps on a serverless platform. Know the core concepts. (Tutorial 1: First ACA)
    2. Communication with Microservices - Dive deeper into two key concepts: environments and virtual networking. Learn how microservices communicate in ACA, and walkthrough an example. (Tutorial 2: ACA with 3 Microservices)
    3. Scaling Your Container Apps - Learn about KEDA. Understand how to configure your ACA for auto-scaling with KEDA-supported triggers. Put this into action by walking through a tutorial. (Tutorial 3: Configure Autoscaling)
    4. Hello, Distributed Application Runtime (Dapr) - Learn about Dapr and how its Building Block APIs simplify microservices development with ACA. Know how the sidecar pattern enables incremental adoption of Dapr APIs without requiring any Dapr code integration in app. (Tutorial 4: Setup & Explore Dapr)
    5. Building ACA with Dapr - See how Dapr works with ACA by building a Dapr-enabled Azure Container App. Walk through a .NET tutorial using Pub/Sub and State Management APIs in an enterprise scenario. (Tutorial 5: Build ACA with Dapr)
    6. Managing Secrets With Dapr - We'll look at the Secrets API (a key Building Block of Dapr) and learn how it simplifies management of sensitive information in ACA.
    7. Microservices + Serverless On Azure - We recap Week 2 (Microservices) and set the stage for Week 3 ( Integrations) of Serverless September. Plus, self-study resources including ACA development tutorials in different languages.

    Ready? Let's go!


    Azure Container Apps!

When building your application, your first decision is about where you host it. The Azure Architecture Center has a handy chart to help you decide between choices like Azure Functions, Azure App Service, Azure Container Instances, Azure Container Apps, and more. But if you are new to this space, you'll need a good understanding of the terms and concepts behind these services. Today, we'll focus on Azure Container Apps (ACA), so let's start with the fundamentals.

    Containerized App Defined

A containerized app is one where the application components, dependencies, and configuration are packaged into a single file (a container image), which can be instantiated in an isolated runtime environment (a container) that is portable across hosts (OS). This makes containers lightweight and scalable, and ensures that applications behave consistently on different host platforms.

    Container images can be shared via container registries (public or private) helping developers discover and deploy related apps with less effort. Scaling a containerized app can be as simple as activating more instances of its container image. However, this requires container orchestrators to automate the management of container apps for efficiency. Orchestrators use technologies like Kubernetes to support capabilities like workload scheduling, self-healing and auto-scaling on demand.
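As a concrete sketch of that packaging-and-sharing flow (the registry, image name, and tag are illustrative):

# package the app into a container image using the Dockerfile in the current directory
docker build -t myregistry.azurecr.io/hello-aca:v1 .

# sign in to the registry and push the image so hosts can pull and run it
az acr login --name myregistry
docker push myregistry.azurecr.io/hello-aca:v1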

    Cloud-Native & Microservices

    Containers are seen as one of the 5 pillars of Cloud-Native app development, an approach where applications are designed explicitly to take advantage of the unique benefits of modern dynamic environments (involving public, private and hybrid clouds). Containers are particularly suited to serverless solutions based on microservices.

    • With serverless - developers use managed services instead of managing their own infrastructure. Services are typically event-driven and can be configured for autoscaling with rules tied to event triggers. Serverless is cost-effective, with developers paying only for the compute cycles and resources they use.
    • With microservices - developers compose their applications from independent components. Each component can be deployed in its own container, and scaled at that granularity. This simplifies component reuse (across apps) and maintainability (over time) - with developers evolving functionality at microservice (vs. app) levels.

    Hello, Azure Container Apps!

    Azure Container Apps is the managed service that helps you run containerized apps and microservices as a serverless compute solution, on Azure. You can:

    • deploy serverless API endpoints - autoscaled by HTTP request traffic
    • host background processing apps - autoscaled by CPU or memory load
    • handle event-driven processing - autoscaled by #messages in queue
    • run microservices - autoscaled by any KEDA-supported scaler.

    Want a quick intro to the topic? Start by watching the short video below - then read these two posts from our ZeroToHero series:


    Deploy Your First ACA

    Dev Options

    We typically have three options for development:

    • Use the Azure Portal - provision and deploy from a browser.
    • Use Visual Studio Code (with relevant extensions) - if you prefer an IDE.
    • Use the Azure CLI - if you prefer to build and deploy from the command line.

    The documentation site has quickstarts for three contexts:

    For this quickstart, we'll go with the first option (deploying an existing sample image) so we can move quickly to core concepts. We'll leave the others as an exercise for you to explore.

    1. Setup Resources

    PRE-REQUISITES

    You need:

    • An Azure account with an active subscription
    • An installed Azure CLI

    Start by logging into Azure from the CLI. The command should launch a browser to complete the auth flow (or give you an option to take an alternative path).

    $ az login

    Successful authentication will result in extensive command-line output detailing the status of your subscription.
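    If you work with more than one subscription, it's worth double-checking which one is active before you create anything. A quick way to do that (the subscription name below is a placeholder):

    $ az account show --output table

    # switch subscriptions if needed (replace the placeholder with your subscription name or ID)
    $ az account set --subscription "<subscription-name-or-id>"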

    Next, install the Azure Container Apps extension for the CLI

    $ az extension add --name containerapp --upgrade
    ...
    The installed extension 'containerapp' is in preview.

    Once successfully installed, register the Microsoft.App namespace.

    $ az provider register --namespace Microsoft.App
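    Provider registration can take a minute or two to complete. If you want to confirm it before moving on, one option is to query the registration state, which should eventually report Registered:

    $ az provider show --namespace Microsoft.App --query registrationState --output tsv
    Registered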

    Then set local environment variables in that terminal - and verify they are set correctly:

    $ RESOURCE_GROUP="my-container-apps"
    $ LOCATION="canadacentral"
    $ CONTAINERAPPS_ENVIRONMENT="my-environment"

    $ echo $LOCATION $RESOURCE_GROUP $CONTAINERAPPS_ENVIRONMENT
    canadacentral my-container-apps my-environment

    Now you can use Azure CLI to provision a resource group for this tutorial. Creating a resource group also makes it easier for us to delete/reclaim all resources used at the end of this tutorial.

    az group create \
    --name $RESOURCE_GROUP \
    --location $LOCATION
    Congratulations

    You completed the Setup step!

    On completion, the console should print out the details of the newly created resource group. You should also be able to visit the Azure Portal and find the newly-active my-container-apps resource group under your active subscription.
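    If you'd rather stay in the terminal instead of switching to the portal, the same check can be done from the CLI, for example:

    $ az group show --name $RESOURCE_GROUP --output table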

    2. Create Environment

    An environment is like the picket fence around your property. It creates a secure boundary that contains a group of container apps - such that all apps deployed to it share the same virtual network and logging resources.

    $ az containerapp env create \
    --name $CONTAINERAPPS_ENVIRONMENT \
    --resource-group $RESOURCE_GROUP \
    --location $LOCATION

    No Log Analytics workspace provided.
    Generating a Log Analytics workspace with name ...

    This can take a few minutes. When done, you will see the terminal display more details. You can also check the resource group in the portal and see that a Container Apps Environment and a Log Analytics Workspace are created for you as part of this step.
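    You can also poll the environment from the CLI rather than the portal. As a rough sketch, the provisioning state should eventually report Succeeded:

    $ az containerapp env show \
      --name $CONTAINERAPPS_ENVIRONMENT \
      --resource-group $RESOURCE_GROUP \
      --query properties.provisioningState \
      --output tsv
    Succeeded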

    You've got the fence set up. Now it's time to build your home - er, container app!

    3. Create Container App

    Here's the command we'll use to create our first Azure Container App. Note that the --image argument provides the link to a pre-existing containerapps-helloworld image.

    az containerapp create \
    --name my-container-app \
    --resource-group $RESOURCE_GROUP \
    --environment $CONTAINERAPPS_ENVIRONMENT \
    --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
    --target-port 80 \
    --ingress 'external' \
    --query properties.configuration.ingress.fqdn
    ...
    ...

    Container app created. Access your app at <URL>

    The --ingress 'external' setting means the app accepts external requests; in other words, it is publicly visible at the <URL> printed to the terminal on successful completion of this step.

    4. Verify Deployment

    Let's see if this works. You can verify that your container app is running by visiting the URL returned above in your browser. You should see something like this!

    Container App Hello World

    You can also visit the Azure Portal and look under the created Resource Group. You should see that a new resource of type Container App was created in this step.
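    If you lose track of the application URL, you can always retrieve the fully qualified domain name (FQDN) again from the CLI, for example:

    $ az containerapp show \
      --name my-container-app \
      --resource-group $RESOURCE_GROUP \
      --query properties.configuration.ingress.fqdn \
      --output tsv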

    Congratulations

    You just created and deployed your first "Hello World" Azure Container App! This validates your local development environment setup and confirms that you have a valid Azure subscription.

    5. Clean Up Your Resources

    It's good practice to clean up resources once you are done with a tutorial.

    THIS ACTION IS IRREVERSIBLE

    This command deletes the resource group we created above - and all resources in it. So make sure you specified the right name, then confirm deletion.

    $ az group delete --name $RESOURCE_GROUP
    Are you sure you want to perform this operation? (y/n):

    Note that you can also delete the resource group from the Azure Portal interface if that feels more comfortable. For now, we'll just use the Portal to verify that deletion occurred. If you had previously opened the Resource Group page for the created resource, just refresh it. You should see something like this:

    Resource Not Found
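    You can also confirm the cleanup from the CLI; once the deletion has finished, a check like this should return false:

    $ az group exists --name $RESOURCE_GROUP
    false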


    Core Concepts

    COMING SOON

    An illustrated guide summarizing these concepts in a single sketchnote.

    We covered a lot today - we'll stop with a quick overview of core concepts behind Azure Container Apps, each linked to documentation for self-study. We'll dive into more details on some of these concepts in upcoming articles:

    • Environments - are the secure boundary around a group of container apps that are deployed in the same virtual network. They write logs to a shared Log Analytics workspace and can communicate seamlessly using Dapr, if used.
    • Containers refer to the container image deployed in the Azure Container App. They can use any runtime, programming language, or development stack - and be discovered using any public or private container registry. A container app can support multiple containers.
    • Revisions are immutable snapshots of an Azure Container App. The first revision is created when the ACA is first deployed, with new revisions created when redeployment occurs with revision-scope changes. Multiple revisions can run concurrently in an environment.
    • Application Lifecycle Management revolves around these revisions, with a container app having three phases: deployment, update and deactivation.
    • Microservices are independent units of functionality in Cloud-Native architectures. A single container app typically represents a single microservice, and can be composed from one or more containers. Microservices can now be scaled and upgraded independently, giving your application more flexibility and control.
    • Networking architecture consists of a virtual network (VNET) associated with the environment. Unless you provide a custom VNET at environment creation time, a default VNET is automatically created. The VNET configuration determines access (ingress, internal vs. external) and can influence auto-scaling choices (e.g., use HTTP Edge Proxy and scale based on number of HTTP requests).
    • Observability is about monitoring the health of your application and diagnosing it to improve reliability or performance. Azure Container Apps has a number of features - from Log streaming and Container console to integration with Azure Monitor - to provide a holistic view of application status over time (see the short CLI sketch after this list).
    • Easy Auth is possible with built-in support for authentication and authorization including support for popular identity providers like Facebook, Google, Twitter and GitHub - alongside the Microsoft Identity Platform.
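    To make a couple of these concepts a little more concrete, here is a short CLI sketch (assuming the my-container-app app from the earlier tutorial still exists) that lists an app's revisions and streams its logs:

    # list the revisions of a container app
    $ az containerapp revision list \
      --name my-container-app \
      --resource-group $RESOURCE_GROUP \
      --output table

    # stream logs from the running app
    $ az containerapp logs show \
      --name my-container-app \
      --resource-group $RESOURCE_GROUP \
      --follow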

    Keep these terms in mind as we walk through more tutorials this week, to see how they apply in real examples. Finally, a note on Dapr, the Distributed Application Runtime that abstracts away many of the challenges posed by distributed systems - and lets you focus on your application logic.

    DAPR INTEGRATION MADE EASY

    Dapr uses a sidecar architecture, allowing Azure Container Apps to communicate with Dapr Building Block APIs over either gRPC or HTTP. Your ACA can be built to run with or without Dapr - giving you the flexibility to incrementally adopt specific APIs and unlock related capabilities as the need arises.
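    For a taste of what that looks like in practice, here is a hedged sketch of creating a Dapr-enabled container app from the CLI, reusing the variables from the tutorial above (assuming the environment still exists); the app name, image, and port are placeholders:

    $ az containerapp create \
      --name my-dapr-app \
      --resource-group $RESOURCE_GROUP \
      --environment $CONTAINERAPPS_ENVIRONMENT \
      --image <your-container-image> \
      --target-port 3000 \
      --ingress 'internal' \
      --enable-dapr \
      --dapr-app-id my-dapr-app \
      --dapr-app-port 3000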

    In later articles this week, we'll do a deeper dive into Dapr and build our first Dapr-enabled Azure Container App to get a better understanding of this integration.

    Exercise

    Congratulations! You made it! By now you should have a good idea of what Cloud-Native development means, why Microservices and Containers are important to that vision - and how Azure Container Apps helps simplify the building and deployment of microservices-based applications using serverless architectures on Azure.

    Now it's your turn to reinforce learning by doing.

    Resources

    Three key resources to bookmark and explore:


    · 8 min read
    David Justo

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Durable Entities
    • Some Background
    • A Programming Model
    • Entities for a Micro-Blogging Platform


    Durable Entities are a special type of Azure Function that allow you to implement stateful objects in a serverless environment. They make it easy to introduce stateful components to your app without needing to manually persist data to external storage, so you can focus on your business logic. We’ll demonstrate their power with a real-life example in the last section.

    Entities 101: Some Background

    Programming Durable Entities feels a lot like object-oriented programming, except that these “objects” exist in a distributed system. Like objects, each Entity instance has a unique identifier (an entity ID) that can be used to read and manipulate its internal state. Entities define a list of operations that constrain how their internal state is managed, like an object interface.

    Some experienced readers may realize that Entities sound a lot like an implementation of the Actor Pattern. For a discussion of the relationship between Entities and Actors, please refer to this documentation.

    Entities are a part of the Durable Functions Extension, an extension of Azure Functions that empowers programmers with stateful abstractions for serverless, such as Orchestrations (i.e. workflows).

    Durable Functions is available in most Azure Functions runtime environments: .NET, Node.js, Python, PowerShell, and Java (preview). For this article, we’ll focus on the C# experience, but note that Entities are also available in Node.js and Python; their availability in other languages is underway.

    Entities 102: The programming model

    Imagine you want to implement a simple Entity that just counts things. Its interface allows you to get the current count, add to the current count, and to reset the count to zero.

    If you implement this in an object-oriented way, you’d probably define a class (say “Counter”) with a method to get the current count (say “Counter.Get”), another to add to the count (say “Counter.Add”), and another to reset the count (say “Counter.Reset”). Well, the implementation of an Entity in C# is not that different from this sketch:

    [JsonObject(MemberSerialization.OptIn)]
    public class Counter
    {
        [JsonProperty("value")]
        public int Value { get; set; }

        public void Add(int amount)
        {
            this.Value += amount;
        }

        public Task Reset()
        {
            this.Value = 0;
            return Task.CompletedTask;
        }

        public Task<int> Get()
        {
            return Task.FromResult(this.Value);
        }

        [FunctionName(nameof(Counter))]
        public static Task Run([EntityTrigger] IDurableEntityContext ctx)
            => ctx.DispatchAsync<Counter>();
    }

    We’ve defined a class named Counter, with an internal count stored in the variable “Value” which is manipulated through the “Add” and “Reset” methods, and which can be read via “Get”.

    The “Run” method is simply boilerplate required for the Azure Functions framework to interact with the object we’ve defined – it’s the method that the framework calls internally when it needs to load the Entity object. When DispatchAsync is called, the Entity and its corresponding state (the last count in “Value”) are loaded from storage. Again, this is mostly just boilerplate: your Entity’s business logic lies in the rest of the class.

    Finally, the JsonObject and JsonProperty annotations on top of the class and the Value field tell the Durable Functions framework that the “Value” field is to be durably persisted as part of the durable state on each Entity invocation. If you were to annotate other class variables with JsonProperty, they would also become part of the managed state.

    Entities for a micro-blogging platform

    We’ll try to implement a simple micro-blogging platform, a la Twitter. Let’s call it “Chirper”. In Chirper, users write chirps (i.e., tweets), they can follow and unfollow other users, and they can read the chirps of users they follow.

    Defining Entity

    Just like in OOP, it’s useful to begin by identifying the stateful agents of this scenario. In this case, users have state (who they follow and their chirps), and chirps have state in the form of their content. So, we could model these stateful agents as Entities!

    Below is a potential way to implement a User for Chirper as an Entity:

    [JsonObject(MemberSerialization = MemberSerialization.OptIn)]
    public class User : IUser
    {
        [JsonProperty]
        public List<string> FollowedUsers { get; set; } = new List<string>();

        public void Add(string user)
        {
            FollowedUsers.Add(user);
        }

        public void Remove(string user)
        {
            FollowedUsers.Remove(user);
        }

        public Task<List<string>> Get()
        {
            return Task.FromResult(FollowedUsers);
        }

        // note: removed boilerplate “Run” method, for conciseness.
    }

    In this case, our Entity’s internal state is stored in “FollowedUsers” which is an array of accounts followed by this user. The operations exposed by this entity allow clients to read and modify this data: it can be read by “Get”, a new follower can be added via “Add”, and a user can be unfollowed via “Remove”.

    With that, we’ve modeled a Chirper user as an Entity! Recall that Entity instances each have a unique ID, so we can consider that unique ID to correspond to a specific user account.

    What about chirps? Should we represent them as Entities as well? That would certainly be valid. However, we would then need to maintain a mapping between each user's entity ID and the entity IDs of every chirp that user wrote.

    For demonstration purposes, a simpler approach would be to create an Entity that stores the list of all chirps authored by a given user; call it UserChirps. Then, we can give each User Entity the same entity ID as its corresponding UserChirps Entity, making client operations easier.

    Below is a simple implementation of UserChirps:

    [JsonObject(MemberSerialization = MemberSerialization.OptIn)]
    public class UserChirps : IUserChirps
    {
        [JsonProperty]
        public List<Chirp> Chirps { get; set; } = new List<Chirp>();

        public void Add(Chirp chirp)
        {
            Chirps.Add(chirp);
        }

        public void Remove(DateTime timestamp)
        {
            Chirps.RemoveAll(chirp => chirp.Timestamp == timestamp);
        }

        public Task<List<Chirp>> Get()
        {
            return Task.FromResult(Chirps);
        }

        // Omitted boilerplate “Run” function
    }

    Here, our state is stored in Chirps, a list of user posts. Our operations follow the same pattern as before - Get, Add, and Remove - but we’re representing different data.

    To put it all together, let’s set up Entity clients to generate and manipulate these Entities according to some REST API.

    Interacting with Entity

    Before going there, let’s talk briefly about how you can interact with an Entity. Entity interactions take one of two forms -- calls and signals:

    • Calling an entity is two-way communication: you send an operation message to the entity and then wait for the response message before you continue. The response can be a result value or an error.
    • Signaling an entity is one-way (fire-and-forget) communication: you send an operation message but don’t wait for a response. You have the reassurance that the message will be delivered eventually, but you don’t know when, and you don’t see a response.

    For example, when you read the state of an Entity, you are performing a “call” interaction. When you record that a user has followed another, you may choose to simply signal it.

    Now say a user with a given userId (say “durableFan99”) wants to post a chirp. For this, you can write an HTTP endpoint that signals the UserChirps entity to record that chirp. We can leverage the HTTP Trigger functionality from Azure Functions and pair it with an entity client binding that signals the Add operation of our UserChirps Entity:

    [FunctionName("UserChirpsPost")] 
    public static async Task<HttpResponseMessage> UserChirpsPost(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "user/{userId}/chirps")]
    HttpRequestMessage req,
    DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {
    Authenticate(req, userId);
    var chirp = new Chirp()
    {
    UserId = userId,
    Timestamp = DateTime.UtcNow,
    Content = await req.Content.ReadAsStringAsync(),
    };
    await client.SignalEntityAsync<IUserChirps>(userId, x => x.Add(chirp));
    return req.CreateResponse(HttpStatusCode.Accepted, chirp);
    }

    Following the same pattern as above, to get all the chirps from a user, you can read the state of your Entity via ReadEntityStateAsync. This follows the call interaction pattern, since your client expects a response:

    [FunctionName("UserChirpsGet")] 
    public static async Task<HttpResponseMessage> UserChirpsGet(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "user/{userId}/chirps")] HttpRequestMessage req,
    [DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {

    Authenticate(req, userId);
    var target = new EntityId(nameof(UserChirps), userId);
    var chirps = await client.ReadEntityStateAsync<UserChirps>(target);
    return chirps.EntityExists
    ? req.CreateResponse(HttpStatusCode.OK, chirps.EntityState.Chirps)
    : req.CreateResponse(HttpStatusCode.NotFound);
    }
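    With both functions deployed to a Function App, you could exercise the endpoints with plain HTTP calls. A minimal sketch using curl, where the host name and function key are placeholders:

    # post a chirp for user durableFan99 (signal interaction; returns 202 Accepted)
    curl -X POST "https://<your-function-app>.azurewebsites.net/api/user/durableFan99/chirps?code=<function-key>" \
         -d "Hello, Chirper!"

    # read back that user's chirps (call interaction; returns 200 OK with the list)
    curl "https://<your-function-app>.azurewebsites.net/api/user/durableFan99/chirps?code=<function-key>"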

    And there you have it! To play with a complete implementation of Chirper, you can try out our sample in the Durable Functions extension repo.

    Thank you!

    info

    Thanks for following along, and we hope you find Entities as useful as we do! If you have questions or feedback, please file issues in the repo above or tag us @AzureFunctions on Twitter.


    · 8 min read
    Kendall Roden

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Defining Cloud-Native
    • Introduction to Azure Container Apps
    • Dapr In Azure Container Apps
    • Conclusion


    Defining Cloud-Native

    While I’m positive I’m not the first person to ask this, I think it’s an appropriate way for us to kick off this article: “How many developers does it take to define Cloud-Native?” I hope you aren’t waiting for a punch line because I seriously want to know your thoughts (drop your perspectives in the comments!) - but if you ask me, the limit does not exist!

    A quick online search of the topic returns a laundry list of articles, e-books, twitter threads, etc. all trying to nail down the one true definition. While diving into the rabbit hole of Cloud-Native, you will inevitably find yourself on the Cloud-Native Computing Foundation (CNCF) site. The CNCF is part of the Linux Foundation and aims to make "Cloud-Native computing ubiquitous" through deep open source project and community involvement. The CNCF has also published arguably the most popularized definition of Cloud-Native which begins with the following statement:

    “Cloud-Native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds."

    Over the past four years, my day-to-day work has been driven primarily by the surging demand for application containerization and the drastic adoption of Kubernetes as the de-facto container orchestrator. Customers are eager to learn and leverage patterns, practices and technologies that enable building "loosely coupled systems that are resilient, manageable, and observable". Enterprise developers at these organizations are being tasked with rapidly deploying event-driven, horizontally-scalable, polyglot services via repeatable, code-to-cloud pipelines.

    While building Cloud-Native solutions can enable rapid innovation, the transition to adopting a Cloud-Native architectural approach comes with a steep learning curve and a new set of considerations. In a document published by Microsoft called What is Cloud-Native?, there are a few key areas highlighted to aid customers in the adoption of best practices for building modern, portable applications which I will summarize below:

    Cloud infrastructure

    • Cloud-Native applications leverage cloud infrastructure and make use of Platform-as-a-service offerings
    • Cloud-Native applications depend on highly-elastic infrastructure with automatic scaling, self-healing, and monitoring capabilities

    Modern application design

    • Cloud-Native applications should be constructed using principles outlined in the 12 factor methodology

    Microservices

    • Cloud-Native applications are typically composed of microservices where each core function, or service, is built and deployed independently

    Containers

    • Cloud-Native applications are typically deployed using containers as a packaging mechanism where an application's code and dependencies are bundled together for consistency of deployment
    • Cloud-Native applications leverage container orchestration technologies - primarily Kubernetes - for achieving capabilities such as workload scheduling, self-healing, auto-scale, etc.

    Backing services

    • Cloud-Native applications are ideally stateless workloads which retrieve and store data in data stores external to the application hosting infrastructure. Cloud providers like Azure provide an array of backing data services which can be securely accessed from application code and provide capabilities for ensuring application data is highly-available

    Automation

    • Cloud-Native solutions should use deployment automation for backing cloud infrastructure via versioned, parameterized Infrastructure as Code (IaC) templates which provide a consistent, repeatable process for provisioning cloud resources.
    • Cloud-Native solutions should make use of modern CI/CD practices and pipelines to ensure successful, reliable infrastructure and application deployment.

    Azure Container Apps

    In many of the conversations I've had with customers that involve talk of Kubernetes and containers, the topics of cost-optimization, security, networking, and reducing infrastructure and operations inevitably arise. I personally have yet to meet with any customers eager to have their developers get more involved with infrastructure concerns.

    One of my former colleagues, Jeff Hollan, made a statement while appearing on a 2019 episode of The Cloud-Native Show where he shared his perspective on Cloud-Native:

    "When I think about Cloud-Native... it's writing applications in a way where you are specifically thinking about the benefits the cloud can provide... to me, serverless is the perfect realization of that because the only reason you can write serverless applications is because the cloud exists."

    I must say that I agree with Jeff's perspective. In addition to optimizing development practices for the Cloud-Native world, reducing infrastructure exposure and operations is equally as important to many organizations and can be achieved as a result of cloud platform innovation.

    In May of 2022, Microsoft announced the general availability of Azure Container Apps. Azure Container Apps provides customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform.

    For those interested in taking advantage of the open source ecosystem while reaping the benefits of a managed platform experience, Container Apps runs on Kubernetes and provides a set of managed open source projects embedded directly into the platform, including the Kubernetes Event Driven Autoscaler (KEDA), the Distributed Application Runtime (Dapr) and Envoy.

    Azure Kubernetes Service vs. Azure Container Apps

    Container Apps provides other Cloud-Native features and capabilities in addition to those above, including, but not limited to:

    The ability to dynamically scale and support growing numbers of users, events, and requests is one of the core requirements for most Cloud-Native, distributed applications. Azure Container Apps is purpose-built with this and other Cloud-Native tenets in mind.

    What can you build with Azure Container Apps?

    Dapr in Azure Container Apps

    As a quick personal note before we dive into this section: I am a bit biased about Dapr. When Dapr was first released, I had an opportunity to immediately get involved and became an early advocate for the project. It was created by developers, for developers, and it solves tangible problems that customers architecting distributed systems face:

    HOW DO I
    • integrate with external systems that my app has to react and respond to?
    • create event driven apps which reliably send events from one service to another?
    • observe the calls and events between my services to diagnose issues in production?
    • access secrets securely from within my application?
    • discover other services and call methods on them?
    • prevent committing to a technology early and have the flexibility to swap out an alternative based on project or environment changes?

    While existing solutions were in the market which could be used to address some of the concerns above, there was not a lightweight, CNCF-backed project which could provide a unified approach to solve the more fundamental ask from customers: "How do I make it easy for developers to build microservices based on Cloud-Native best practices?"

    Enter Dapr!

    "The Distributed Application Runtime (Dapr) provides APIs that simplify microservice connectivity. Whether your communication pattern is service to service invocation or pub/sub messaging, Dapr helps you write resilient and secured microservices. By letting Dapr’s sidecar take care of the complex challenges such as service discovery, message broker integration, encryption, observability, and secret management, you can focus on business logic and keep your code simple."

    The Container Apps platform provides a managed and supported Dapr integration which eliminates the need for deploying and managing the Dapr OSS project yourself. In addition to providing managed upgrades, the platform also exposes a simplified Dapr interaction model to increase developer productivity and reduce the friction required to leverage Dapr capabilities. While the Dapr integration makes it easier for customers to adopt Cloud-Native best practices in container apps, it is not required to make use of the container apps platform.

    Image on Dapr

    For additional insight into the Dapr integration, visit aka.ms/aca-dapr.
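    If you already have a container app running and just want to see the integration in action, one hedged sketch is to enable the Dapr sidecar from the CLI (the app name, resource group, app id, and port below are placeholders):

    az containerapp dapr enable \
      --name my-app \
      --resource-group my-resource-group \
      --dapr-app-id my-app \
      --dapr-app-port 8080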

    Conclusion

    Backed by and integrated with powerful Cloud-Native technologies, Azure Container Apps strives to make developers productive, while reducing the operational overhead and learning curve that typically accompanies adopting a cloud-native strategy.

    If you are interested in building resilient, portable and highly-scalable apps, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.


    · 7 min read
    Brian Benz

    Welcome to Day 25 of #30DaysOfServerless!

    Azure Container Apps enable application code packaged in containers to run and scale without the overhead of managing cloud infrastructure and container orchestration. In this post I'll show you how to deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps.


    What We'll Cover

    • Introduction to Deploying Java containers in the cloud
    • Step-by-step: Deploying to Azure Container Registry
    • Step-by-step: Deploying and running on Azure Container Apps
    • Resources: For self-study!


    Deploy Java containers to cloud

    We'll deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps. Here are the main steps:

    • Create Azure Container Registry (ACR) on Azure portal
    • Create Azure Container App (ACA) on Azure portal.
    • Deploy code to Azure Container Registry from the Azure CLI.
    • Deploy container from ACR to ACA using the Azure portal.
    PRE-REQUISITES

    Sign in to Azure from the CLI using the az login command, and follow the prompts in your browser to complete the authentication process. Also, ensure you're running the latest version of the CLI by using the az upgrade command.

    1. Get Sample Code

    Fork and clone the sample GitHub repo to your local machine: navigate to the repository page and click Fork in the top-right corner of the page.

    The example code that we're using is a very basic containerized Spring Boot example. There is a lot more to learn about Spring Boot apps in Docker; for a deep dive, check out this Spring Boot Guide.

    2. Run Sample Locally (Optional)

    If you have docker installed locally, you can optionally test the code on your local machine. Navigate to the root directory of the forked repository and run the following commands:

    docker build -t spring-boot-docker-aca .
    docker run -p 8080:8080 spring-boot-docker-aca

    Open a browser and go to http://localhost:8080. You should see this message:

    Hello Docker World

    That indicates that the Spring Boot app is successfully running locally in a Docker container.

    Next, let's set up an Azure Container Registry and an Azure Container App, and deploy this container to the cloud!


    3. Step-by-step: Deploy to ACR

    To create a container registry from the portal dashboard, Select Create a resource > Containers > Container Registry.

    Navigate to container registry in portal

    In the Basics tab, enter values for Resource group and Registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. Create a new resource group in the West US location named spring-boot-docker-aca. Select the 'Basic' SKU.

    Keep the default values for the remaining settings. Then select Review + create, then Create. When the Deployment succeeded message appears, select the container registry in the portal.

    Note the registry server name ending with azurecr.io. You will use this in the following steps when you push and pull images with Docker.

    3.1 Log into registry using the Azure CLI

    Before pushing and pulling container images, you must log in to the registry instance. Sign into the Azure CLI on your local machine, then run the az acr login command. For this step, use the registry name, not the server name ending with azurecr.io.

    From the command line, type:

    az acr login --name myregistryname

    The command returns Login Succeeded once completed.

    3.2 Build & deploy with az acr build

    Next, we're going to build and deploy the Docker container we created earlier using the az acr build command. az acr build builds the container image from local code and pushes it to Azure Container Registry if the build is successful.

    Go to your local clone of the spring-boot-docker-aca repo and, from the command line, type:

    az acr build --registry myregistryname --image spring-boot-docker-aca:v1 .

    3.3 List container images

    Once the az acr build command is complete, you should be able to view the container as a repository in the registry. In the portal, open your registry and select Repositories, then select the spring-boot-docker-aca repository created by az acr build. You should also see the v1 image under Tags.
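    If you prefer the CLI over the portal for this check, you could also list the repositories and tags directly:

    az acr repository list --name myregistryname --output table
    az acr repository show-tags --name myregistryname --repository spring-boot-docker-aca --output table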

    4. Deploy on ACA

    Now that we have an image in the Azure Container Registry, we can deploy it to Azure Container Apps. For the first deployment, we'll pull the container from our ACR as part of the ACA setup.
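    The steps below use the Azure portal. If you would rather stay in the CLI, a rough equivalent sketch looks like the following; it reuses the names from this tutorial, and the registry credentials are placeholders you can copy from the registry's Access keys blade (admin user enabled):

    az containerapp env create \
      --name my-environment \
      --resource-group spring-boot-docker-aca \
      --location westus3

    az containerapp create \
      --name spring-boot-docker-aca \
      --resource-group spring-boot-docker-aca \
      --environment my-environment \
      --image myregistryname.azurecr.io/spring-boot-docker-aca:v1 \
      --registry-server myregistryname.azurecr.io \
      --registry-username <acr-username> \
      --registry-password <acr-password> \
      --target-port 8080 \
      --ingress external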

    4.1 Create a container app

    We'll create the container app at the same place that we created the container registry in the Azure portal. From the portal, select Create a resource > Containers > Container App. In the Basics tab, set these values:

    4.2 Enter project details

    • Subscription: Your Azure subscription.
    • Resource group: Use the spring-boot-docker-aca resource group.
    • Container app name: Enter spring-boot-docker-aca.

    4.3 Create an environment

    1. In the Create Container App environment field, select Create new.

    2. In the Create Container App Environment page on the Basics tab, enter the following values:

      • Environment name: Enter my-environment.
      • Region: Select westus3.
    3. Select OK.

    4. Select the Create button at the bottom of the Create Container App Environment page.

    5. Select the Next: App settings button at the bottom of the page.

    5. App settings tab

    The App settings tab is where you connect to the ACR and pull the repository image:

    • Use quickstart image: Uncheck the checkbox.
    • Name: Enter spring-boot-docker-aca.
    • Image source: Select Azure Container Registry.
    • Registry: Select your ACR from the list.
    • Image: Select spring-boot-docker-aca from the list.
    • Image Tag: Select v1 from the list.

    5.1 Application ingress settings

    • Ingress: Select Enabled.
    • Ingress visibility: Select External to publicly expose your container app.
    • Target port: Enter 8080.

    5.2 Deploy the container app

    1. Select the Review and create button at the bottom of the page.
    2. Select Create.

    Once the deployment is successfully completed, you'll see the message: Your deployment is complete.

    5.3 Verify deployment

    In the portal, go to the Overview of your spring-boot-docker-aca Azure Container App, and click on the Application Url. You should see this message in the browser:

    Hello Docker World

    That indicates that the Spring Boot app is running in a Docker container in your spring-boot-docker-aca Azure Container App.

    Resources: For self-study!

    Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!


    · 19 min read
    Alex Wolf

    Welcome to Day 24 of #30DaysOfServerless!

    We continue exploring E2E scenarios with this tutorial where you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps.

    The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.


    What We'll Cover

    • Deploy ASP.NET Core 6.0 app to Azure Container Apps
    • Automate deployment workflows using GitHub Actions
    • Provision and deploy resources using Azure Bicep
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.

    In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps. The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.

    You will use GitHub Actions in combination with Bicep to deploy the application. These tools provide an approachable and sustainable solution for building CI/CD pipelines and working with Container Apps.

    PRE-REQUISITES

    Architecture

    In this tutorial, we'll set up a container apps environment with a separate container for each project in the sample store app. The major components of the sample project include:

    • A Blazor Server front-end web app to display product information
    • A products API to list available products
    • An inventory API to determine how many products are in stock
    • GitHub Actions and Bicep templates to provision Azure resources and then build and deploy the sample app.

    You will explore these templates later in the tutorial.

    Public internet traffic should be proxied to the Blazor app. The back-end APIs should only be reachable via requests from the Blazor app inside the container apps environment. This setup can be achieved using container apps environment ingress configurations during deployment.

    An architecture diagram of the shopping app


    Project Sources

    Want to follow along? Fork the sample below. The tutorial can be completed with or without Dapr integration - pick whichever path you feel comfortable with. Dapr provides various benefits that make working with microservices easier - you can learn more in the docs. For this tutorial you will need a GitHub account and the Azure CLI.

    PICK YOUR PATH

    To follow along with this tutorial, fork the relevant sample project below.

    You can run the app locally from Visual Studio:

    • Right click on the Blazor Store project and select Set as Startup Project.
    • Press the start button at the top of Visual Studio to run the app.
    • (Once running) start each API in the background by
    • right-clicking on the project node
    • selecting Debug --> Start without debugging.

    Once the Blazor app is running, you should see something like this:

    An architecture diagram of the shopping app


    Configuring Azure credentials

    In order to deploy the application to Azure through GitHub Actions, you first need to create a service principal. The service principal will allow the GitHub Actions process to authenticate to your Azure subscription to create resources and deploy code. You can learn more about Service Principals in the Azure CLI documentation. For this step you'll need to be logged into the Azure CLI.

    1) If you have not done so already, make sure to fork the sample project to your own GitHub account or organization.

    2) Once you have completed this step, create a service principal using the Azure CLI command below:

    ```azurecli
    $subscriptionId=$(az account show --query id --output tsv)
    az ad sp create-for-rbac --sdk-auth --name WebAndApiSample --role Contributor --scopes /subscriptions/$subscriptionId
    ```

    3) Copy the JSON output of the CLI command to your clipboard.

    4) Under the settings tab of your forked GitHub repo, create a new secret named AzureSPN. The name is important to match the Bicep templates included in the project, which we'll review later. Paste the service principal JSON you copied into the secret and save your changes. This new secret will be used by the GitHub Actions workflow to authenticate to Azure.

    A screenshot of adding GitHub secrets.

    Deploy using GitHub Actions

    You are now ready to deploy the application to Azure Container Apps using GitHub Actions. The sample application includes a GitHub Actions template that is configured to build and deploy any changes to a branch named deploy. The deploy branch does not exist in your forked repository by default, but you can easily create it through the GitHub user interface.
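    If you are more comfortable in a terminal, an equivalent way to create the branch (assuming you have cloned your fork locally and origin points at it) is:

    git checkout -b deploy
    git push -u origin deploy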

    1) Switch to the Actions tab along the top navigation of your GitHub repository. If you have not done so already, ensure that workflows are enabled by clicking the button in the center of the page.

    A screenshot showing how to enable GitHub actions

    2) Navigate to the main Code tab of your repository and select the main dropdown. Enter deploy into the branch input box, and then select Create branch: deploy from 'main'.

    A screenshot showing how to create the deploy branch

    3) On the new deploy branch, navigate down into the .github/workflows folder. You should see a file called deploy.yml, which contains the main GitHub Actions workflow script. Click on the file to view its content. You'll learn more about this file later in the tutorial.

    4) Click the pencil icon in the upper right to edit the document.

    5) Change the RESOURCE_GROUP_NAME: value to msdocswebappapis or another valid resource group name of your choosing.

    6) In the upper right of the screen, select Start commit and then Commit changes to commit your edit. This will persist the change to the file and trigger the GitHub Actions workflow to build and deploy the app.

    A screenshot showing how to commit changes

    7) Switch to the Actions tab along the top navigation again. You should see the workflow running to create the necessary resources and deploy the app. The workflow may take several minutes to run. When it completes successfully, all of the jobs should have a green checkmark icon next to them.

    The completed GitHub workflow.

    Explore the Azure resources

    Once the GitHub Actions workflow has completed successfully you can browse the created resources in the Azure portal.

    1) On the left navigation, select Resource Groups. Next, choose the msdocswebappapis resource group that was created by the GitHub Actions workflow.

    2) You should see seven resources available that match the screenshot and table descriptions below.

    The resources created in Azure.

    • inventory (Container app): The containerized inventory API.
    • msdocswebappapisacr (Container registry): A registry that stores the built container images for your apps.
    • msdocswebappapisai (Application insights): Application Insights provides advanced monitoring, logging and metrics for your apps.
    • msdocswebappapisenv (Container apps environment): A container environment that manages networking, security and resource concerns. All of your containers live in this environment.
    • msdocswebappapislogs (Log Analytics workspace): A workspace environment for managing logging and analytics for the container apps environment.
    • products (Container app): The containerized products API.
    • store (Container app): The Blazor front-end web app.

    3) You can view your running app in the browser by clicking on the store container app. On the overview page, click the Application Url link on the upper right of the screen.

    The link to browse the app.

    Understanding the GitHub Actions workflow

    The GitHub Actions workflow created and deployed resources to Azure using the deploy.yml file in the .github/workflows folder at the root of the project. The primary purpose of this file is to respond to events - such as commits to a branch - and run jobs to accomplish tasks. The deploy.yml file in the sample project has three main jobs:

    • Provision: Create the necessary resources in Azure, such as the container apps environment. This step leverages Bicep templates to create the Azure resources, which you'll explore in a moment.
    • Build: Create the container images for the three apps in the project and store them in the container registry.
    • Deploy: Deploy the container images to the different container apps created during the provisioning job.

    The deploy.yml file also accepts parameters to make the workflow more dynamic, such as setting the resource group name or the Azure region resources will be provisioned to.

    Below is a commented version of the deploy.yml file that highlights the essential steps.

    name: Build and deploy .NET application to Container Apps

    # Trigger the workflow on pushes to the deploy branch
    on:
    push:
    branches:
    - deploy

    env:
    # Set workflow variables
    RESOURCE_GROUP_NAME: msdocswebappapis

    REGION: eastus

    STORE_DOCKER: Store/Dockerfile
    STORE_IMAGE: store

    INVENTORY_DOCKER: Store.InventoryApi/Dockerfile
    INVENTORY_IMAGE: inventory

    PRODUCTS_DOCKER: Store.ProductApi/Dockerfile
    PRODUCTS_IMAGE: products

    jobs:
    # Create the required Azure resources
    provision:
    runs-on: ubuntu-latest

    steps:

    - name: Checkout to the branch
    uses: actions/checkout@v2

    - name: Azure Login
    uses: azure/login@v1
    with:
    creds: ${{ secrets.AzureSPN }}

    - name: Create resource group
    uses: azure/CLI@v1
    with:
    inlineScript: >
    echo "Creating resource group in Azure"
    echo "Executing 'az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}'"
    az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}

    # Use Bicep templates to create the resources in Azure
    - name: Creating resources
    uses: azure/CLI@v1
    with:
    inlineScript: >
    echo "Creating resources"
    az deployment group create --resource-group ${{ env.RESOURCE_GROUP_NAME }} --template-file '/github/workspace/Azure/main.bicep' --debug

    # Build the three app container images
    build:
    runs-on: ubuntu-latest
    needs: provision

    steps:

    - name: Checkout to the branch
    uses: actions/checkout@v2

    - name: Azure Login
    uses: azure/login@v1
    with:
    creds: ${{ secrets.AzureSPN }}

    - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v1

    - name: Login to ACR
    run: |
    set -euo pipefail
    access_token=$(az account get-access-token --query accessToken -o tsv)
    refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
    docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

    - name: Build the products api image and push it to ACR
    uses: docker/build-push-action@v2
    with:
    push: true
    tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}
    file: ${{ env.PRODUCTS_DOCKER }}

    - name: Build the inventory api image and push it to ACR
    uses: docker/build-push-action@v2
    with:
    push: true
    tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}
    file: ${{ env.INVENTORY_DOCKER }}

    - name: Build the frontend image and push it to ACR
    uses: docker/build-push-action@v2
    with:
    push: true
    tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}
    file: ${{ env.STORE_DOCKER }}

    # Deploy the three container images
    deploy:
    runs-on: ubuntu-latest
    needs: build

    steps:

    - name: Checkout to the branch
    uses: actions/checkout@v2

    - name: Azure Login
    uses: azure/login@v1
    with:
    creds: ${{ secrets.AzureSPN }}

    - name: Installing Container Apps extension
    uses: azure/CLI@v1
    with:
    inlineScript: >
    az config set extension.use_dynamic_install=yes_without_prompt

    az extension add --name containerapp --yes

    - name: Login to ACR
    run: |
    set -euo pipefail
    access_token=$(az account get-access-token --query accessToken -o tsv)
    refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
    docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

    - name: Deploy Container Apps
    uses: azure/CLI@v1
    with:
    inlineScript: >
    az containerapp registry set -n products -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

    az containerapp update -n products -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}

    az containerapp registry set -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

    az containerapp update -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}

    az containerapp registry set -n store -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

    az containerapp update -n store -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}

    - name: logout
    run: >
    az logout

    Understanding the Bicep templates

    During the provisioning stage of the GitHub Actions workflow, the main.bicep file is processed. Bicep files provide a declarative way of generating resources in Azure and are ideal for managing infrastructure as code. You can learn more about Bicep in the related documentation. The main.bicep file in the sample project creates the following resources:

    • The container registry to store images of the containerized apps.
    • The container apps environment, which handles networking and resource management for the container apps.
    • Three container apps - one for the Blazor front-end and two for the back-end product and inventory APIs.
    • Configuration values to connect these services together

    main.bicep without Dapr

    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    // create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    // create the various configuration pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the store api container app
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    main.bicep with Dapr


    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    // create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    // create the various config pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    # create the store api container app
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: true
    }
    }


    Bicep Modules

    The main.bicep file references modules to create resources, such as module products. Modules are a feature of Bicep templates that enable you to abstract resource declarations into their own files or sub-templates. As the main.bicep file is processed, the defined modules are also evaluated. Modules allow you to create resources in a more organized and reusable way. They can also define input and output parameters that are passed to and from the parent template, such as the name of a resource.

    For example, the environment.bicep module extracts the details of creating a container apps environment into a reusable template. The module defines necessary resource dependencies such as Log Analytics Workspaces and an Application Insights instance.
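
    If you want to try these templates yourself, a parent template like main.bicep is deployed at the resource group scope. Here's a minimal, hedged sketch of the deployment commands (the resource group name and location are placeholders, not part of the sample):

    ```bash
    # Create a resource group and deploy main.bicep into it.
    # The modules it references (environment.bicep, container_app.bicep) are
    # compiled and deployed as part of the same operation.
    az group create --name my-store-rg --location eastus

    az deployment group create \
      --resource-group my-store-rg \
      --template-file main.bicep
    ```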

    environment.bicep without Dapr

param baseName string = resourceGroup().name
param location string = resourceGroup().location

resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: '${baseName}logs'
  location: location
  properties: any({
    retentionInDays: 30
    features: {
      searchVersion: 1
    }
    sku: {
      name: 'PerGB2018'
    }
  })
}

resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
  name: '${baseName}ai'
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
  }
}

resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
  name: '${baseName}env'
  location: location
  properties: {
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: logs.properties.customerId
        sharedKey: logs.listKeys().primarySharedKey
      }
    }
  }
}

output id string = env.id
output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
output appInsightsConnectionString string = appInsights.properties.ConnectionString

    environment.bicep with Dapr


param baseName string = resourceGroup().name
param location string = resourceGroup().location

resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: '${baseName}logs'
  location: location
  properties: any({
    retentionInDays: 30
    features: {
      searchVersion: 1
    }
    sku: {
      name: 'PerGB2018'
    }
  })
}

resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
  name: '${baseName}ai'
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
  }
}

resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
  name: '${baseName}env'
  location: location
  properties: {
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: logs.properties.customerId
        sharedKey: logs.listKeys().primarySharedKey
      }
    }
  }
}

output id string = env.id
output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
output appInsightsConnectionString string = appInsights.properties.ConnectionString


    The container_app.bicep template defines numerous parameters so that it serves as a reusable template for creating container apps. This also allows the module to be reused in other CI/CD pipelines.

    container_app.bicep without Dapr

param name string
param location string = resourceGroup().location
param containerAppEnvironmentId string
param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
param envVars array = []
param registry string
param minReplicas int = 1
param maxReplicas int = 1
param port int = 80
param externalIngress bool = false
param allowInsecure bool = true
param transport string = 'http'
param registryUsername string
@secure()
param registryPassword string

resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' = {
  name: name
  location: location
  properties: {
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
      activeRevisionsMode: 'single'
      secrets: [
        {
          name: 'container-registry-password'
          value: registryPassword
        }
      ]
      registries: [
        {
          server: registry
          username: registryUsername
          passwordSecretRef: 'container-registry-password'
        }
      ]
      ingress: {
        external: externalIngress
        targetPort: port
        transport: transport
        allowInsecure: allowInsecure
      }
    }
    template: {
      containers: [
        {
          image: repositoryImage
          name: name
          env: envVars
        }
      ]
      scale: {
        minReplicas: minReplicas
        maxReplicas: maxReplicas
      }
    }
  }
}

output fqdn string = containerApp.properties.configuration.ingress.fqdn

    container_app.bicep with Dapr


param name string
param location string = resourceGroup().location
param containerAppEnvironmentId string
param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
param envVars array = []
param registry string
param minReplicas int = 1
param maxReplicas int = 1
param port int = 80
param externalIngress bool = false
param allowInsecure bool = true
param transport string = 'http'
param appProtocol string = 'http'
param registryUsername string
@secure()
param registryPassword string

resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' = {
  name: name
  location: location
  properties: {
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
      dapr: {
        enabled: true
        appId: name
        appPort: port
        appProtocol: appProtocol
      }
      activeRevisionsMode: 'single'
      secrets: [
        {
          name: 'container-registry-password'
          value: registryPassword
        }
      ]
      registries: [
        {
          server: registry
          username: registryUsername
          passwordSecretRef: 'container-registry-password'
        }
      ]
      ingress: {
        external: externalIngress
        targetPort: port
        transport: transport
        allowInsecure: allowInsecure
      }
    }
    template: {
      containers: [
        {
          image: repositoryImage
          name: name
          env: envVars
        }
      ]
      scale: {
        minReplicas: minReplicas
        maxReplicas: maxReplicas
      }
    }
  }
}

output fqdn string = containerApp.properties.configuration.ingress.fqdn


    Understanding configuration differences with Dapr

    The code for this specific sample application is largely the same whether or not Dapr is integrated. However, even with this simple app, there are a few benefits and configuration differences when using Dapr that are worth exploring.

    In this scenario most of the changes are related to communication between the container apps. However, you can explore the full range of Dapr benefits by reading the Dapr integration with Azure Container Apps article in the conceptual documentation.

    Without Dapr

    Without Dapr, the main.bicep template handles wiring up the front-end store app to communicate with the back-end APIs by manually managing environment variables. The Bicep template retrieves the fully qualified domain names (FQDNs) of the API apps as output parameters when they are created. Those values are then set as environment variables on the store container app.


// Retrieve environment variables from API container creation
var frontend_config = [
  {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
  }
  {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
  }
]

// create the store api container app, passing in config
module store 'container_app.bicep' = {
  name: 'store'
  params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
  }
}

    The environment variables are then read inside the Program class and used to configure the base URLs of the corresponding HTTP clients.


    builder.Services.AddHttpClient("Products", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("ProductsApi")));
    builder.Services.AddHttpClient("Inventory", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("InventoryApi")));

    With Dapr

    Dapr can be enabled on a container app when it is created, as seen below. This configuration adds a Dapr sidecar to the app to streamline discovery and communication features between the different container apps in your environment.


// Create the container app with Dapr enabled
resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' = {
  name: name
  location: location
  properties: {
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
      dapr: {
        enabled: true
        appId: name
        appPort: port
        appProtocol: appProtocol
      }
      activeRevisionsMode: 'single'
      secrets: [
        {
          name: 'container-registry-password'
          value: registryPassword
        }
      ]

      // Rest of template omitted for brevity...
    }
  }

    Some of these Dapr features are surfaced through the Program file. You can configure your HttpClient to leverage Dapr when communicating with other apps in your environment.


// reconfigure code to make requests to Dapr sidecar
var baseURL = (Environment.GetEnvironmentVariable("BASE_URL") ?? "http://localhost") + ":" + (Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500");

builder.Services.AddHttpClient("Products", (httpClient) =>
{
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Products");
});

builder.Services.AddHttpClient("Inventory", (httpClient) =>
{
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Inventory");
});
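
    With this configuration, requests go to the local Dapr sidecar, and the dapr-app-id default header tells Dapr which app to forward the call to. A hedged usage sketch (the method name and route are illustrative):

    ```csharp
    // Inside any class that receives IHttpClientFactory via dependency injection.
    // The request is sent to the sidecar (http://localhost:3500 by default) and
    // Dapr routes it to the container app whose Dapr app id is "Products".
    public async Task<string> GetProductsViaDaprAsync(IHttpClientFactory httpClientFactory)
    {
        var client = httpClientFactory.CreateClient("Products");
        return await client.GetStringAsync("/api/products"); // route is an assumption
    }
    ```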


    Clean up resources

    If you're not going to continue to use this application, you can delete the Azure Container Apps and all the associated services by removing the resource group.

    Follow these steps in the Azure portal to remove the resources you created (a CLI alternative is shown after the steps):

    1. In the Azure portal, navigate to the msdocswebappsapi resource group using the left navigation or search bar.
    2. Select the Delete resource group button at the top of the resource group Overview.
    3. Enter the resource group name msdocswebappsapi in the Are you sure you want to delete "msdocswebappsapi" confirmation dialog.
    4. Select Delete.
      The process to delete the resource group may take a few minutes to complete.
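
    If you'd rather use the Azure CLI, a single command removes the same resource group and everything in it:

    ```bash
    # --yes skips the confirmation prompt; add --no-wait to return immediately.
    az group delete --name msdocswebappsapi --yes
    ```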
    - + \ No newline at end of file diff --git a/blog/tags/dapr/page/4/index.html b/blog/tags/dapr/page/4/index.html index 26901c2aae..df53f833f6 100644 --- a/blog/tags/dapr/page/4/index.html +++ b/blog/tags/dapr/page/4/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    16 posts tagged with "dapr"

    View All Tags

    · 10 min read
    Ayca Bas

    Welcome to Day 20 of #30DaysOfServerless!

    Every day millions of people spend their precious time in productivity tools. What if you use data and intelligence behind the Microsoft applications (Microsoft Teams, Outlook, and many other Office apps) to build seamless automations and custom apps to boost productivity?

    In this post, we'll learn how to build a seamless onboarding experience for new employees joining a company with the power of Microsoft Graph, integrated with Event Hubs and Logic Apps!


    What We'll Cover

    • ✨ The power of Microsoft Graph
    • 🖇️ How do Microsoft Graph and Event Hubs work together?
    • 🛠 Let's Build an Onboarding Workflow!
      • 1️⃣ Setup Azure Event Hubs + Key Vault
      • 2️⃣ Subscribe to users, receive change notifications from Logic Apps
      • 3️⃣ Create Onboarding workflow in the Logic Apps
    • 🚀 Debug: Your onboarding experience
    • ✋ Exercise: Try this tutorial out yourself!
    • 📚 Resources: For Self-Study


    ✨ The Power of Microsoft Graph

    Microsoft Graph is the gateway to data and intelligence in the Microsoft 365 platform. It exposes REST APIs and client libraries to access data across Microsoft 365 core services such as Calendar, Teams, To Do, Outlook, People, Planner, OneDrive, OneNote, and more.

    Overview of Microsoft Graph

    You can build custom experiences with Microsoft Graph, such as automating the onboarding process for new employees. When new employees are created in Azure Active Directory, they will automatically be added to the Onboarding team on Microsoft Teams.

    Solution architecture


    🖇️ Microsoft Graph with Event Hubs

    Microsoft Graph uses a webhook mechanism to track changes in resources and deliver change notifications to the clients. For example, with Microsoft Graph Change Notifications, you can receive change notifications when:

    • a new task is added in the to-do list
    • a user changes the presence status from busy to available
    • an event is deleted/cancelled from the calendar

    If you'd like to track a large set of resources at a high frequency, use Azure Event Hubs instead of traditional webhooks to receive change notifications. Azure Event Hubs is a popular real-time event ingestion and distribution service built for scale.

    EVENT GRID - PARTNER EVENTS

    Microsoft Graph Change Notifications can also be received by using Azure Event Grid -- currently available for Microsoft Partners! Read the Partner Events Overview documentation for details.

    Setup Azure Event Hubs + Key Vault

    To get Microsoft Graph Change Notifications delivered to Azure Event Hubs, we'll have to set up Azure Event Hubs and Azure Key Vault. We'll use Azure Key Vault to store the Event Hubs connection string that Microsoft Graph will use.

    1️⃣ Create Azure Event Hubs

    1. Go to the Azure portal and select Create a resource, type Event Hubs, and select Create.
    2. Fill in the Event Hubs namespace creation details, and then click Create.
    3. Go to the newly created Event Hubs namespace page, select Event Hubs tab from the left pane and + Event Hub:
      • Name your Event Hub as Event Hub
      • Click Create.
    4. Click the name of the Event Hub, and then select Shared access policies and + Add to add a new policy:
      • Give a name to the policy
      • Check Send and Listen
      • Click Create.
    5. After the policy has been created, click the name of the policy to open the details panel, and then copy the Connection string-primary key value. Write it down; you'll need it for the next step.
    6. Go to the Consumer groups tab in the left pane and select + Consumer group, name your consumer group onboarding, and select Create.

    2️⃣ Create Azure Key Vault

    1. Go to the Azure portal and select Create a resource, type Key Vault, and select Create.
    2. Fill in the Key Vault creation details, and then click Review + Create.
    3. Go to the newly created Key Vault, select the Secrets tab from the left pane, and click + Generate/Import:
      • Give a name to the secret
      • For the value, paste in the connection string you generated at the Event Hubs step
      • Click Create
      • Copy the name of the secret.
    4. Select Access Policies from the left pane and + Add Access Policy:
      • For Secret permissions, select Get
      • For Principal, select Microsoft Graph Change Tracking
      • Click Add.
    5. Select Overview tab from the left pane and copy the Vault URI.

    Subscribe to change notifications using Logic Apps

    To start receiving Microsoft Graph Change Notifications, we'll need to create a subscription to the resource that we'd like to track - here, users. We'll use Azure Logic Apps to create the subscription.

    To create a subscription for Microsoft Graph Change Notifications, we'll need to make an HTTP POST request to https://graph.microsoft.com/v1.0/subscriptions. Microsoft Graph requires Azure Active Directory authentication to make API calls. First, we'll register an app in Azure Active Directory, and then we'll make the Microsoft Graph Subscription API call with Azure Logic Apps.

    1️⃣ Create an app in Azure Active Directory

    1. In the Azure Portal, go to Azure Active Directory and select App registrations from the left pane and select + New registration. Fill in the details for the new App registration form as below:
      • Name: Graph Subscription Flow Auth
      • Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
      • Select Register.
    2. Go to newly registered app in Azure Active Directory, select API permissions:
      • Select + Add a permission and Microsoft Graph
      • Select Application permissions and add User.Read.All and Directory.Read.All.
      • Select Grant admin consent for the organization
    3. Select Certificates & secrets tab from the left pane, select + New client secret:
      • Choose desired expiry duration
      • Select Add
      • Copy the value of the secret.
    4. Go to Overview from the left pane, copy Application (client) ID and Directory (tenant) ID.

    2️⃣ Create subscription with Azure Logic Apps

    1. Go to the Azure portal and select Create a resource, type Logic App, and select Create.

    2. Fill in the Logic Apps creation details, and then click Create.

    3. Go to the newly created Logic Apps page, select Workflows tab from the left pane and select + Add:

      • Give a name to the new workflow as graph-subscription-flow
      • Select Stateful as a state type
      • Click Create.
    4. Go to graph-subscription-flow, and then select Designer tab.

    5. In the Choose an operation section, search for Schedule and select Recurrence as a trigger. Fill in the parameters as below:

      • Interval: 61
      • Frequency: Minute
      • Time zone: Select your own time zone
      • Start time: Set a start time
    6. Select + button in the flow and select add an action. Search for HTTP and select HTTP as an action. Fill in the parameters as below:

      • Method: POST
      • URI: https://graph.microsoft.com/v1.0/subscriptions
      • Headers:
        • Key: Content-type
        • Value: application/json
      • Body:
      {
      "changeType": "created, updated",
      "clientState": "secretClientValue",
      "expirationDateTime": "@{addHours(utcNow(), 1)}",
      "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47",
      "resource": "users"
      }

      In notificationUrl, make sure to replace <YOUR-VAULT-URI> with the vault uri and <YOUR-KEY-VAULT-SECRET-NAME> with the secret name that you copied from the Key Vault.

      In resource, define the resource type you'd like to track changes for. In our example, we will track changes for the users resource.

      • Authentication:
        • Authentication type: Active Directory OAuth
        • Authority: https://login.microsoft.com
        • Tenant: Directory (tenant) ID copied from AAD app
        • Audience: https://graph.microsoft.com
        • Client ID: Application (client) ID copied from AAD app
        • Credential Type: Secret
        • Secret: value of the secret copied from AAD app
    7. Select Save and run your workflow from the Overview tab.

      Check your subscription in Graph Explorer: If you'd like to make sure that your subscription was created successfully by Logic Apps, you can go to Graph Explorer, sign in with your Microsoft 365 account, and make a GET request to https://graph.microsoft.com/v1.0/subscriptions. Your subscription should appear in the response after it's created successfully.

    Subscription workflow success

    After the subscription is created successfully by Logic Apps, Azure Event Hubs will receive a notification whenever a new user is created in Azure Active Directory.
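
    For orientation, the change notification that lands in Event Hubs looks roughly like the sketch below (values are placeholders and some fields are omitted; the exact payload depends on your subscription):

    ```json
    {
      "value": [
        {
          "subscriptionId": "<subscription-id>",
          "changeType": "created",
          "clientState": "secretClientValue",
          "resource": "Users/<user-id>",
          "resourceData": {
            "@odata.type": "#Microsoft.Graph.User",
            "id": "<user-id>"
          },
          "tenantId": "<tenant-id>"
        }
      ]
    }
    ```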


    Create Onboarding workflow in Logic Apps

    We'll create a second workflow in the Logic App to receive change notifications from Event Hubs when a new user is created in Azure Active Directory and add the new user to the Onboarding team on Microsoft Teams.

    1. Go to the Logic App you created in the previous steps, select the Workflows tab, and create a new workflow by selecting + Add:
      • Name the new workflow teams-onboarding-flow
      • Select Stateful as a state type
      • Click Create.
    2. Go to teams-onboarding-flow, and then select Designer tab.
    3. In the Choose an operation section, search for Event Hub and select When events are available in Event Hub as the trigger. Set up the Event Hub connection as below:
      • Create Connection:
        • Connection name: Connection
        • Authentication Type: Connection String
        • Connection String: Go to Event Hubs > Shared Access Policies > RootManageSharedAccessKey and copy Connection string–primary key
        • Select Create.
      • Parameters:
        • Event Hub Name: Event Hub
        • Consumer Group Name: onboarding
    4. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: Events
    5. Inside For each, select + in the flow and add an action, search for Data operations and select Parse JSON. Fill in Parse JSON action as below:
      • Content: Events Content
      • Schema: Copy the json content from schema-parse.json and paste as a schema
    6. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: value
      1. Inside For each, select + in the flow and add an action, search for Microsoft Teams and select Add a member to a team. Sign in with your Microsoft 365 account to create a connection and fill in the Add a member to a team action as below:
      • Team: Create an Onboarding team on Microsoft Teams and select it
      • A user AAD ID for the user to add to a team: id
    7. Select Save.

    🚀 Debug your onboarding experience

    To debug our onboarding experience, we'll need to create a new user in Azure Active Directory and see if it's automatically added to the Onboarding team on Microsoft Teams.

    1. Go to Azure Portal and select Azure Active Directory from the left pane and go to Users. Select + New user and Create new user. Fill in the details as below:

      • User name: JaneDoe
      • Name: Jane Doe

      new user in Azure Active Directory

    2. When you add Jane Doe as a new user, it should trigger the teams-onboarding-flow to run. teams onboarding flow success

    3. Once the teams-onboarding-flow runs successfully, you should be able to see Jane Doe as a member of the Onboarding team on Microsoft Teams! 🥳 new member in Onboarding team on Microsoft Teams

    Congratulations! 🎉

    You just built an onboarding experience using Azure Logic Apps, Azure Event Hubs and Azure Key Vault.


    📚 Resources

    - + \ No newline at end of file diff --git a/blog/tags/dapr/page/5/index.html b/blog/tags/dapr/page/5/index.html index 36e544fb30..6ad40c0e13 100644 --- a/blog/tags/dapr/page/5/index.html +++ b/blog/tags/dapr/page/5/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "dapr"

    View All Tags

    · 5 min read
    Mike Morton

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Log Streaming - in Azure Portal
    • Console Connect - in Azure Portal
    • Metrics - using Azure Monitor
    • Log Analytics - using Azure Monitor
    • Metric Alerts and Log Alerts - using Azure Monitor


    In past weeks, @kendallroden wrote about what it means to be Cloud-Native and @Anthony Chu covered the various ways to get your apps running on Azure Container Apps. Today, we will talk about the observability tools you can use to observe, debug, and diagnose your Azure Container Apps.

    Azure Container Apps provides several observability features to help you debug and diagnose your apps. There are both Azure portal and CLI options you can use to help understand the health of your apps and help identify when issues arise.

    While these features are helpful throughout your container app’s lifetime, there are two that are especially helpful. Log streaming and console connect can be a huge help in the initial stages when issues often rear their ugly head. Let's dig into both of these a little.

    Log Streaming

    Log streaming allows you to use the Azure portal to view the streaming logs from your app. You’ll see the logs written from the app to the container’s console (stderr and stdout). If your app is running multiple revisions, you can choose from which revision to view logs. You can also select a specific replica if your app is configured to scale. Lastly, you can choose from which container to view the log output. This is useful when you are running a custom or Dapr sidecar container. view streaming logs

    Here’s an example CLI command to view the logs of a container app.

    az containerapp logs show -n MyContainerapp -g MyResourceGroup
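
    The command accepts additional options; for example, you can follow the stream and limit the initial output with flags like those below (these flag names are assumptions based on current CLI behavior - verify them with az containerapp logs show --help on your installed version):

    ```bash
    # Follow the log stream and only show the most recent lines
    # (flag names assumed - check --help for your CLI version).
    az containerapp logs show -n MyContainerapp -g MyResourceGroup --follow --tail 30
    ```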

    You can find more information about the different options in our CLI docs.

    Console Connect

    In the Azure portal, you can connect to the console of a container in your app. Like log streaming, you can select the revision, replica, and container if applicable. After connecting to the console of the container, you can execute shell commands and utilities that you have installed in your container. You can view files and their contents, monitor processes, and perform other debugging tasks.

    This can be great for checking configuration files or even modifying a setting or library your container is using. Of course, updating a container in this fashion is not something you should do to a production app, but tweaking and re-testing an app in a non-production environment can speed up development.

    Here’s an example CLI command to connect to the console of a container app.

    az containerapp exec -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.

    Metrics

    Azure Monitor collects metric data from your container app at regular intervals to help you gain insights into the performance and health of your container app. Container apps provide these metrics:

    • CPU usage
    • Memory working set bytes
    • Network in bytes
    • Network out bytes
    • Requests
    • Replica count
    • Replica restart count

    Here you can see the metrics explorer showing the replica count for an app as it scaled from one replica to fifteen, and then back down to one.

    You can also retrieve metric data through the Azure CLI.
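
    As a hedged sketch, pulling one of the metrics listed above with the CLI could look like this (the resource ID pieces are placeholders to replace with your own subscription, resource group, and app name):

    ```bash
    # List the "Requests" metric for a container app over the default time window.
    az monitor metrics list \
      --resource "/subscriptions/<sub-id>/resourceGroups/MyResourceGroup/providers/Microsoft.App/containerApps/MyContainerapp" \
      --metric Requests
    ```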

    Log Analytics

    Azure Monitor Log Analytics is great for viewing your historical logs emitted from your container apps. There are two custom tables of interest: ContainerAppConsoleLogs_CL, which contains all the log messages written by your app (stdout and stderr), and ContainerAppSystemLogs_CL, which contains the system messages from the Azure Container Apps service.

    You can also query Log Analytics through the Azure CLI.
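
    For example, here's a hedged sketch of querying the console logs table from the CLI (the workspace GUID is a placeholder, and the column names are assumptions based on the table's typical schema):

    ```bash
    # Query the latest console log lines for a single container app.
    # <workspace-guid> is the Log Analytics workspace customer ID.
    az monitor log-analytics query \
      --workspace "<workspace-guid>" \
      --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'MyContainerapp' | project TimeGenerated, Log_s | take 100"
    ```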

    Alerts

    Azure Monitor alerts notify you so that you can respond quickly to critical issues. There are two types of alerts that you can define: metric alerts and log alerts.

    You can create alert rules from metric charts in the metric explorer and from queries in Log Analytics. You can also define and manage alerts from the Monitor|Alerts page.

    Here is what creating an alert looks like in the Azure portal. In this case we are setting an alert rule from the metric explorer to trigger an alert if the replica restart count for a specific container app is greater than two within the last fifteen minutes.

    To learn more about alerts, refer to Overview of alerts in Microsoft Azure.

    Conclusion

    In this article, we looked at several ways to observe, debug, and diagnose your Azure Container Apps. As you can see, there are rich portal tools and a complete set of CLI commands to use. All of these tools are helpful throughout the lifecycle of your app; be sure to take advantage of them when troubleshooting an issue and to help prevent issues in the first place.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/tags/dapr/page/6/index.html b/blog/tags/dapr/page/6/index.html index c6e3a691de..fb54127c15 100644 --- a/blog/tags/dapr/page/6/index.html +++ b/blog/tags/dapr/page/6/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "dapr"

    View All Tags

    · 10 min read
    Brian Benz

    Welcome to Day 18 of #30DaysOfServerless!

    Yesterday my Serverless September post introduced you to making Azure Logic Apps and Azure Cosmos DB work together with a sample application that collects weather data. Today I'm sharing a more robust solution that actually reads my mail. Let's learn about Teaching the cloud to read your mail!

    Ready? Let's go!


    What We'll Cover

    • Introduction to the ReadMail solution
    • Setting up Azure storage, Cosmos DB and Computer Vision
    • Connecting it all together with a Logic App
    • Resources: For self-study!


    Introducing the ReadMail solution

    The US Postal system offers a subscription service that sends you images of mail it will be delivering to your home. I decided it would be cool to try getting Azure to collect data based on these images, so that I could categorize my mail and track the types of mail that I received.

    To do this, I used Azure storage, Cosmos DB, Logic Apps, and computer vision. When a new email comes in from the US Postal service (USPS), it triggers a logic app that:

    • Posts attachments to Azure storage
    • Triggers Azure Computer vision to perform an OCR function on attachments
    • Extracts any results into a JSON document
    • Writes the JSON document to Cosmos DB

    workflow for the readmail solution

    In this post I'll walk you through setting up the solution for yourself.

    Prerequisites

    Setup Azure Services

    First, we'll create all of the target services that our Logic App will use, then we'll create the Logic App.

    1. Azure Storage

    We'll be using Azure storage to collect attached images from emails as they arrive. Adding images to Azure storage will also trigger a workflow that performs OCR on new attached images and stores the OCR data in Cosmos DB.

    To create a new Azure storage account from the portal dashboard, Select Create a resource > Storage account > Create.

    The Basics tab covers all of the features and information that we will need for this solution:

    Section | Field | Required or optional | Description
    Project details | Subscription | Required | Select the subscription for the new storage account.
    Project details | Resource group | Required | Create a new resource group that you will use for storage, Cosmos DB, Computer Vision and the Logic App.
    Instance details | Storage account name | Required | Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
    Instance details | Region | Required | Select the appropriate region for your storage account.
    Instance details | Performance | Required | Select Standard performance for general-purpose v2 storage accounts (default).
    Instance details | Redundancy | Required | Select locally-redundant Storage (LRS) for this example.

    Select Review + create to accept the remaining default options, then validate and create the account.

    2. Azure CosmosDB

    Cosmos DB will be used to store the JSON documents returned by the Computer Vision OCR process.

    See more details and screen shots for setting up CosmosDB in yesterday's Serverless September post - Using Logic Apps with Cosmos DB

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then for simplicity use the same resource group you created when you set up storage. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    Setting | Action
    Container ID | id
    Container partition | /id

    Press OK to create a database and container

    3. Azure Computer Vision

    Azure Cognitive Services' Computer Vision will perform an OCR process on each image attachment that is stored in Azure storage.

    From the portal dashboard, Select Create a resource > AI + Machine Learning > Computer Vision > Create.

    The Basics and Identity tabs cover all of the features and information that we will need for this solution:

    Basics Tab

    Section | Field | Required or optional | Description
    Project details | Subscription | Required | Select the subscription for the new service.
    Project details | Resource group | Required | Use the same resource group that you used for Azure storage and Cosmos DB.
    Instance details | Region | Required | Select the appropriate region for your Computer Vision service.
    Instance details | Name | Required | Choose a unique name for your Computer Vision service.
    Instance details | Pricing | Required | Select the free tier for this example.

    Identity Tab

    Section | Field | Required or optional | Description
    System assigned managed identity | Status | Required | Enable system assigned identity to grant the resource access to other existing resources.

    Select Review + create to accept the remaining default options, then validate and create the account.


    Connect it all with a Logic App

    Now we're ready to put this all together in a Logic App workflow!

    1. Create Logic App

    From the portal dashboard, Select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location, the rest of the settings can be left at their defaults.

    2. Create Workflow: Add Trigger

    Once the Logic App is created, select Create a workflow from designer.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for outlook.com on the right under Add a trigger. Choose outlook.com. Choose When a new email arrives as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Set the following values:

    Parameter | Value
    Folder | Inbox
    Importance | Any
    Only With Attachments | Yes
    Include Attachments | Yes

    Then add a new parameter:

    Parameter | Value
    From | Add the email address that sends you the email with attachments
    3. Create Workflow: Add Action (for Trigger)

    Choose add an action and choose control > for-each.

    logic app for each

    Inside the for-each action, in Select an output from previous steps, choose attachments. Then, again inside the for-each action, add the create blob action:

    Set the following values:

    Parameter | Value
    Folder Path | /mailreaderinbox
    Blob Name | Attachments Name
    Blob Content | Attachments Content

    This extracts attachments from the email and creates a new blob for each attachment.

    Next, inside the same for-each action, add the get blob content action.

    Set the following values:

    Parameter | Value
    Blob | id
    Infer content type | Yes

    We create and read from a blob for each attachment because Computer Vision needs a non-virtual source to read from when performing an OCR process. Because we enabled a system-assigned identity to grant Computer Vision access to other existing resources, it can access the blob but not the outlook.com attachment. Also, we pass the ID of the blob to use as a unique ID when writing to Cosmos DB.

    create blob from attachments

    Next, inside the same for-each action, choose add an action and choose control > condition. Set the value to Media Type > is equal to > image/JPEG

    The USPS sends attachments of multiple types, but we only want to scan attachments that have images of our mail, which are always JPEG images. If the condition is true, we will process the image with Computer Vision OCR and write the results to a JSON document in CosmosDB.

    In the True section of the condition, add an action and choose Computer Vision API > Optical Character Recognition (OCR) to JSON.

    Set the following values:

    Parameter | Value
    Image Source | Image Content
    Image content | File Content

    In the same True section of the condition, choose add an action and choose Cosmos DB. Choose Create or Update Document from the actions. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    Be sure to use the ID passed from blob storage as your unique ID for Cosmos DB. That way you can troubleshoot any JSON or OCR issues by tracing the JSON document in Cosmos DB back to the blob in Azure storage. Also, include the Computer Vision JSON response, as it contains the results of the Computer Vision OCR scan. All other elements are optional.
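
    To make that concrete, the document you compose might look roughly like the sketch below; only id comes from blob storage, the other property names are illustrative, and the OCR result is heavily abbreviated:

    ```json
    {
      "id": "<blob id from the get blob content step>",
      "ocrResult": {
        "language": "en",
        "regions": [
          {
            "lines": [
              { "words": [ { "text": "SAMPLE" }, { "text": "MAILER" } ] }
            ]
          }
        ]
      }
    }
    ```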

    4. Test the Workflow

    When complete, you should have an action in the Logic App designer that looks something like this:

    Logic App workflow create or update document in cosmosdb

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB each time that an email arrives with image attachments.

    Check the data in Cosmos Db by opening the Data explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    5. Congratulations

    You just built your personal ReadMail solution with Logic Apps! 🎉


    Resources: For self-study!

    Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    - + \ No newline at end of file diff --git a/blog/tags/dapr/page/7/index.html b/blog/tags/dapr/page/7/index.html index a3879a35ac..1deb8f7237 100644 --- a/blog/tags/dapr/page/7/index.html +++ b/blog/tags/dapr/page/7/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "dapr"

    View All Tags

    · 6 min read
    Brian Benz

    Welcome to Day 17 of #30DaysOfServerless!

    In past weeks, we've covered serverless technologies that provide core capabilities (functions, containers, microservices) for building serverless solutions. This week we're looking at technologies that make service integrations more seamless, starting with Logic Apps. Let's look at one usage example today!

    Ready? Let's Go!


    What We'll Cover

    • Introduction to Logic Apps
    • Setting up Cosmos DB for Logic Apps
    • Setting up a Logic App connection and event
    • Writing data to Cosmos DB from a Logic app
    • Resources: For self-study!


    Introduction to Logic Apps

    Previously in Serverless September, we've covered Azure Functions, where the event triggers code. In Logic Apps, the event triggers a workflow that you design. Logic Apps enable serverless applications to connect to external sources for data then automate business processes via workflows.

    In this post I'll walk you through setting up a Logic App that works with Cosmos DB. For this example, we'll connect to the MSN Weather service and design a Logic App workflow that collects data when the weather changes and writes the data to Cosmos DB.

    PREREQUISITES

    Setup Cosmos DB for Logic Apps

    Cosmos DB has many APIs to choose from, but to use the default Logic App connection, we need to choose the Cosmos DB SQL API. We'll set this up via the Azure portal.

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then create a new resource group called CosmosWeather. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create

    Azure Cosmos DB is available in two different capacity modes: provisioned throughput and serverless. You can perform the same database operations in both modes, but the way you get billed for these operations is different. We will be using provisioned throughput and the free tier for this example.

    Setup the CosmosDB account

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    Setting | Action
    Container ID | id
    Container partition | /id

    Press OK to create a database and container

    A database is analogous to a traditional DBMS namespace. It's used to organize one or more containers.

    Setup the CosmosDB Container

    Now we're ready to set up our Logic App and write to Cosmos DB!

    Setup Logic App connection + event

    Once the Cosmos DB SQL API account is created, we can set up our Logic App. From the portal dashboard, select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults. Once your new Logic App is created, select Create a workflow from designer to get started.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for weather on the right under Add a trigger. Choose MSN Weather. Choose When the current conditions change as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Add a location. Valid locations are City, Region, State, Country, Landmark, Postal Code, latitude and longitude. This triggers a new workflow when the conditions change for a location.

    Write data from Logic App to Cosmos DB

    Now we are ready to set up the action to write data to Cosmos DB. Choose add an action and choose Cosmos DB.

    An action is each step in a workflow after the trigger. Every action runs some operation in a workflow.

    In this case, we will be writing a JSON document to the Cosmos DB container we created earlier. Choose Create or Update Document from the actions. At this point you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger

    Start with the connection to set up the Cosmos DB action. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    You will need a unique ID for each document that you write to Cosmos DB; for that you can use an expression. Because we declared id to be our unique ID in Cosmos DB, we will use that for the name. Under expressions, type guid() and press Enter to add a unique ID to the JSON document. When complete, you should have a workflow in designer that looks something like this:
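
    As a rough sketch, the JSON body for the Create or Update Document action could look like the following, where @{guid()} is the expression you just added and the other values stand in for dynamic content tokens from the MSN Weather trigger (the exact token names depend on the connector):

    ```json
    {
      "id": "@{guid()}",
      "location": "<Location dynamic content>",
      "conditions": "<Conditions dynamic content>",
      "temperature": "<Temperature dynamic content>"
    }
    ```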

    Logic App workflow with trigger and action

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB over the next few minutes.

    Check the data in Cosmos Db by opening the Data explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    Resources: For self-study!

    Once you've grasped the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    - + \ No newline at end of file diff --git a/blog/tags/dapr/page/8/index.html b/blog/tags/dapr/page/8/index.html index ca47982abe..6976d7861b 100644 --- a/blog/tags/dapr/page/8/index.html +++ b/blog/tags/dapr/page/8/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "dapr"

    View All Tags

    · 4 min read
    Nitya Narasimhan
    Devanshi Joshi

    Welcome to Day 15 of #30DaysOfServerless!

    This post marks the midpoint of our Serverless on Azure journey! Our Week 2 Roadmap showcased two key technologies - Azure Container Apps (ACA) and Dapr - for building serverless microservices. We'll also look at what happened elsewhere in #ServerlessSeptember, then set the stage for our next week's focus: Serverless Integrations.

    Ready? Let's Go!


    What We'll Cover

    • ICYMI: This Week on #ServerlessSeptember
    • Recap: Microservices, Azure Container Apps & Dapr
    • Coming Next: Serverless Integrations
    • Exercise: Take the Cloud Skills Challenge
    • Resources: For self-study!

    This Week In Events

    We had a number of activities happen this week - here's a quick summary:

    This Week in #30Days

    In our #30Days series we focused on Azure Container Apps and Dapr.

    • In Hello Container Apps we learned how Azure Container Apps helps you run microservices and containerized apps on serverless platforms. And we built and deployed our first ACA.
    • In Microservices Communication we explored concepts like environments and virtual networking, with a hands-on example to show how two microservices communicate in a deployed ACA.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and how to configure autoscaling for your ACA based on KEDA-supported triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr) and learned how its Building Block APIs and sidecar architecture make it easier to develop microservices with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
    • Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Here's a visual recap:

    Self Study: Code Samples & Tutorials

    There's no better way to get familiar with the concepts than to dive in and play with code samples and hands-on tutorials. Here are 4 resources to bookmark and try out:

    1. Dapr Quickstarts - these walk you through samples showcasing individual Building Block APIs - with multiple language options available.
    2. Dapr Tutorials provides more complex examples of microservices applications and tools usage, including a Distributed Calculator polyglot app.
    3. Next, try to Deploy a Dapr application to Azure Container Apps to get familiar with the process of setting up the environment, then deploying the app.
    4. Or, explore the many Azure Container Apps samples showcasing various features and more complex architectures tied to real world scenarios.

    What's Next: Serverless Integrations!

    So far we've talked about core technologies (Azure Functions, Azure Container Apps, Dapr) that provide foundational support for your serverless solution. Next, we'll look at Serverless Integrations - specifically at technologies like Azure Logic Apps and Azure Event Grid that automate workflows and create seamless end-to-end solutions that integrate other Azure services in serverless-friendly ways.

    Take the Challenge!

    The Cloud Skills Challenge is still going on, and we've already had hundreds of participants join and complete the learning modules to skill up on Serverless.

    There's still time to join and get yourself on the leaderboard. Get familiar with Azure Functions, SignalR, Logic Apps, Azure SQL and more - in serverless contexts!!


    - + \ No newline at end of file diff --git a/blog/tags/dapr/page/9/index.html b/blog/tags/dapr/page/9/index.html index fa6684f1df..0406c8d749 100644 --- a/blog/tags/dapr/page/9/index.html +++ b/blog/tags/dapr/page/9/index.html @@ -14,7 +14,7 @@ - + @@ -24,7 +24,7 @@ Image showing container apps role assignment

  • Lastly, we need to restart the container app revision; to do so, run the command below:

     ## Get revision name and assign it to a variable
    $REVISION_NAME = (az containerapp revision list `
      --name $BACKEND_SVC_NAME `
      --resource-group $RESOURCE_GROUP `
      --query [0].name)

    ## Restart revision by name
    az containerapp revision restart `
      --resource-group $RESOURCE_GROUP `
      --name $BACKEND_SVC_NAME `
      --revision $REVISION_NAME
  • Run end-to-end Test on Azure

    From the Azure portal, select the Azure Container App orders-processor and navigate to Log stream under the Monitoring tab; leave the stream connected and open. Next, select the Azure Service Bus namespace ordersservices, select the topic orderreceivedtopic, select the subscription named orders-processor-subscription, and then click Service Bus Explorer (preview). From there we need to publish/send a message. Use the JSON payload below:

    ```json
    {
      "data": {
        "reference": "Order 150",
        "quantity": 150,
        "createdOn": "2022-05-10T12:45:22.0983978Z"
      }
    }
    ```

    If all is configured correctly, you should start seeing the information logs in Container Apps Log stream, similar to the images below Image showing publishing messages from Azure Service

    Information logs on the Log stream of the deployed Azure Container App Image showing ACA Log Stream

    🎉 CONGRATULATIONS

    You have successfully deployed an Azure Container App to the cloud and configured the Dapr Pub/Sub API with Azure Service Bus.

    9. Clean up

    If you are done with the tutorial, use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name $RESOURCE_GROUP

    Exercise

    I've left the configuration of the Dapr State Store API with Azure Cosmos DB for you :)

    When you look at the action method OrderReceived in the controller ExternalOrdersController, you will see that I left a line with a ToDo: note; this line is responsible for saving the received message (OrderModel) into Azure Cosmos DB.

    There is no need to change anything in the code base other than this commented line; that's the beauty of Dapr Building Blocks and how easily they let us plug components into our microservice application without any plumbing or bringing in external SDKs.

    You do need to work on the configuration part of the Dapr State Store by creating a new component file, like we did for the Pub/Sub API. The things that you need to work on are:

    • Provision Azure Cosmos DB Account and obtain its masterKey.
    • Create a Dapr Component file adhering to Dapr Specs.
    • Create an Azure Container Apps component file adhering to ACA component specs.
    • Test locally on your dev machine using Dapr Component file.
    • Register the new Dapr State Store component with the Azure Container Apps environment and set the Cosmos DB masterKey from the Azure portal. If you want to challenge yourself more, use the Managed Identity approach as done in this post! It's the right way to protect your keys, and you won't have to worry about managing Cosmos DB keys anymore!
    • Build a new image of the application and push it to Azure Container Registry.
    • Update Azure Container Apps and create a new revision which contains the updated code.
    • Verify the results by checking Azure Cosmos DB, you should see the Order Model stored in Cosmos DB.

    If you need help, you can always refer to my blog post Azure Container Apps State Store With Dapr State Management API, which contains exactly what you need to implement here, so I'm very confident you will be able to complete this exercise with no issues. Happy coding :)

    What's Next?

    If you enjoyed working with Dapr and Azure Container Apps and want a deeper dive into more complex scenarios (Dapr bindings, service discovery, autoscaling with KEDA, synchronous service communication, distributed tracing, health probes, etc.) where multiple services are deployed to a single Container Apps environment, I have created a detailed tutorial that walks you through building the application step by step.

    The posts published so far are listed below, and I'm publishing more on a weekly basis, so stay tuned :)

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/devtools/index.html b/blog/tags/devtools/index.html index fd25bf1595..16d1c41819 100644 --- a/blog/tags/devtools/index.html +++ b/blog/tags/devtools/index.html @@ -14,13 +14,13 @@ - +

    2 posts tagged with "devtools"

    View All Tags

    · 5 min read
    Savannah Ostrowski

    Welcome to Beyond #30DaysOfServerless! in October!

    Yes, it's October!! And since we ended #ServerlessSeptember with a focus on End-to-End Development for Serverless on Azure, we thought it would be good to share updates in October that can help you skill up even further.

    Today, we're following up on the Code to Cloud with azd blog post (Day #29) where we introduced the Azure Developer CLI (azd), an open-source tool for streamlining your end-to-end developer experience going from local development environment to Azure cloud. In today's post, we celebrate the October 2022 release of the tool, with three cool new features.

    And if it's October, it must be #Hacktoberfest!! Read on to learn about how you can take advantage of one of the new features, to contribute to the azd open-source community and ecosystem!

    Ready? Let's go!


    What We'll Cover

    • Azure Friday: Introducing the Azure Developer CLI (Video)
    • October 2022 Release: What's New in the Azure Developer CLI?
      • Azure Pipelines for CI/CD: Learn more
      • Improved Infrastructure as Code structure via Bicep modules: Learn more
      • A new azd template gallery: The new azd-templates gallery for community use! Learn more
    • Awesome-Azd: The new azd-templates gallery for Community use
      • Features: discover, create, contribute, request - templates
      • Hacktoberfest: opportunities to contribute in October - and beyond!


    Azure Friday

    This post is a follow-up to our #ServerlessSeptember post on Code to Cloud with Azure Developer CLI where we introduced azd, a new open-source tool that makes it quick and simple for you to move your application from a local development environment to Azure, streamlining your end-to-end developer workflow in the process.

    Prefer to watch a video overview? I have you covered! Check out my recent conversation with Scott Hanselman on Azure Friday where we:

    • talked about the code-to-cloud developer journey
    • walked through the ins and outs of an azd template
    • explored Azure Developer CLI commands in the terminal and VS Code, and
    • (probably most importantly) got a web app up and running on Azure with a database, Key Vault and monitoring all in a couple of minutes

    October Release

    We're pleased to announce the October 2022 release of the Azure Developer CLI (currently 0.3.0-beta.2). Read the release announcement for more details. Here are the highlights:

    • Azure Pipelines for CI/CD: This addresses azure-dev#101, adding support for Azure Pipelines (alongside GitHub Actions) as a CI/CD provider - see the short CLI sketch after this list. Learn more about usage and related documentation.
    • Improved Infrastructure as Code structure via Bicep modules: This addresses azure-dev#543, which recognized the complexity of using a single resources.bicep file for all resources. With this release, azd templates now come with Bicep modules organized by purpose making it easier to edit and understand. Learn more about this structure, and how to use it.
    • New Templates Gallery - awesome-azd: This addresses azure-dev#398, which aimed to make templates more discoverable and easier to contribute. Learn more about how the new gallery improves the template discovery experience.
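
    For example, to try the new Azure Pipelines support you run azd pipeline config from an azd project. The provider flag below is my best recollection of the CLI surface at the time of this release, so treat it as an assumption and check azd pipeline config --help on your installed version:

    ```bash
    # Configure CI/CD for the current azd project
    azd pipeline config                  # GitHub Actions (default)
    azd pipeline config --provider azdo  # Azure Pipelines (flag value is an assumption - verify with --help)
    ```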

    In the next section, we'll dive briefly into the last feature, introducing the new awesome-azd site and resource for templates discovery and contribution. And, since it's #Hacktoberfest season, we'll talk about the Contributor Guide and the many ways you can contribute to this project - with, or without, code.


    It's awesome-azd

    Welcome to awesome-azd, a new template gallery hosted on GitHub Pages and meant to be a destination site for discovering, requesting, and contributing azd-templates for community use!

    In addition, its README reflects the awesome-list resource format, providing a location for the community to share "best of" resources for the Azure Developer CLI - from blog posts and videos to full-scale tutorials and templates.

    The Gallery is organized into three main areas:

    Take a minute to explore the Gallery and note the features:

    • Search for templates by name
    • Requested Templates - indicating asks from the community
    • Featured Templates - highlighting high-quality templates
    • Filters - to discover templates using and/or query combinations

    Check back often to see the latest contributed templates and requests!
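
    Once a template catches your eye, trying it out is a short loop; the template name below is just an example from the gallery:

    ```bash
    # Scaffold a new project from a gallery template (template name is an example)
    azd init --template todo-nodejs-mongo

    # Provision the Azure resources and deploy the app in one step
    azd up
    ```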


    Hacktoberfest

    So, why is this a good time to talk about the Gallery? Because October means it's time for #Hacktoberfest - a month-long celebration of open-source projects and their maintainers, and an opportunity for first-time contributors to get support and guidance making their first pull-requests! Check out the #Hacktoberfest topic on GitHub for projects you can contribute to.

    And we hope you think of awesome-azd as another possible project to contribute to.

    Check out the FAQ section to learn how to create, discover, and contribute templates. Or take a couple of minutes to watch this video walkthrough from Jon Gallant:

    And don't hesitate to reach out to us - either via Issues on the repo, or in the Discussions section of this site, to give us feedback!

    Happy Hacking! 🎃


diff --git a/blog/tags/devtools/page/2/index.html b/blog/tags/devtools/page/2/index.html

    2 posts tagged with "devtools"

    View All Tags

    · 9 min read
    Nitya Narasimhan

    Welcome to Day 3 of #30DaysOfServerless!

    Yesterday we learned core concepts and terminology for Azure Functions, the signature Functions-as-a-Service option on Azure. Today we take our first steps into building and deploying an Azure Functions app, and validate local development setup.

    Ready? Let's go.


    What We'll Cover


    Developer Guidance

    Before we jump into development, let's familiarize ourselves with language-specific guidance from the Azure Functions Developer Guide. We'll review the JavaScript version but guides for F#, Java, Python, C# and PowerShell are also available.

    1. A function is defined by two things: code (written in a supported programming language) and configuration (specified in a function.json file, declaring the triggers, bindings and other context for execution).

    2. A function app is the unit of deployment for your functions, and is associated with a single execution context or runtime. It can contain multiple functions, but they must be in the same language.

    3. A host configuration is runtime-specific configuration that affects all functions running in a given function app instance. It is defined in a host.json file.

    4. A recommended folder structure is defined for the function app, but may vary based on the programming language used. Check the documentation on folder structures to learn the default for your preferred language.

    Here's an example of the JavaScript folder structure for a function app containing two functions with some shared dependencies. Note that host.json (runtime configuration) is defined once, in the root directory. And function.json is defined separately for each function.

    FunctionsProject
    | - MyFirstFunction
    | | - index.js
    | | - function.json
    | - MySecondFunction
    | | - index.js
    | | - function.json
    | - SharedCode
    | | - myFirstHelperFunction.js
    | | - mySecondHelperFunction.js
    | - node_modules
    | - host.json
    | - package.json
    | - local.settings.json

    We'll dive into what the contents of these files look like, when we build and deploy the first function. We'll cover local.settings.json in the About Local Testing section at the end.


    My First Function App

    The documentation provides quickstart options for all supported languages. We'll walk through the JavaScript versions in this article. You have two options for development: working in Visual Studio Code, or using the command line with Azure Functions Core Tools.

    I'm a huge fan of VS Code - so I'll be working through that tutorial today.

    PRE-REQUISITES

    Don't forget to validate your setup by checking the versions of installed software.

    Install VSCode Extension

    Installing the Visual Studio Code extension should automatically open this page in your IDE with similar quickstart instructions, but potentially more recent screenshots.

    Visual Studio Code Extension for VS Code

    Note that it may make sense to install the Azure tools for Visual Studio Code extensions pack if you plan on working through the many projects in Serverless September. This includes the Azure Functions extension by default.

    Create First Function App

    Walk through the Create local [project] steps of the quickstart. The process is quick and painless and scaffolds out this folder structure and files. Note the existence (and locations) of the function.json and host.json files.

    Final screenshot for VS Code workflow

    Explore the Code

    Check out the function.json configuration file. It shows that the function is activated by an httpTrigger with an input binding (tied to the req payload) and an output binding (tied to the res payload). And it supports both GET and POST requests on the exposed URL.

    {
      "bindings": [
        {
          "authLevel": "anonymous",
          "type": "httpTrigger",
          "direction": "in",
          "name": "req",
          "methods": [
            "get",
            "post"
          ]
        },
        {
          "type": "http",
          "direction": "out",
          "name": "res"
        }
      ]
    }

    Check out index.js - the function implementation. We see it logs a message to the console when invoked. It then extracts a name value from the input payload (req) and crafts a different responseMessage based on the presence/absence of a valid name. It returns this response in the output payload (res).

    module.exports = async function (context, req) {
        context.log('JavaScript HTTP trigger function processed a request.');

        const name = (req.query.name || (req.body && req.body.name));
        const responseMessage = name
            ? "Hello, " + name + ". This HTTP triggered function executed successfully."
            : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";

        context.res = {
            // status: 200, /* Defaults to 200 */
            body: responseMessage
        };
    }

    Preview Function App Locally

    You can now run this function app locally using Azure Functions Core Tools. VS Code integrates seamlessly with this CLI-based tool, making it possible for you to exploit all its capabilities without leaving the IDE. In fact, the workflow will even prompt you to install those tools if they didn't already exist in your local dev environment.

    Now run the function app locally by clicking on the "Run and Debug" icon in the activity bar (highlighted, left) and pressing the "▶️" (Attach to Node Functions) to start execution. On success, your console output should show something like this.

    Final screenshot for VS Code workflow

    You can test the function locally by visiting the Function Url shown (http://localhost:7071/api/HttpTrigger1) or by opening the Workspace region of the Azure extension, and selecting the Execute Function now menu item as shown.

    Final screenshot for VS Code workflow

    In the latter case, the Enter request body popup will show a pre-populated request of {"name":"Azure"} that you can submit.

    Final screenshot for VS Code workflow

    On successful execution, your VS Code window will show a notification as follows. Take note of the console output - it shows the message encoded in index.js.

    Final screenshot for VS Code workflow

    You can also visit the function URL directly in a local browser - testing the case for a request made with no name payload attached. Note how the response in the browser now shows the non-personalized version of the message!

    Final screenshot for VS Code workflow
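
    If you prefer the terminal for these checks, the same two cases can be exercised with curl (HttpTrigger1 is the default name from the scaffold):

    ```bash
    # Personalized response - name passed in the query string
    curl "http://localhost:7071/api/HttpTrigger1?name=Azure"

    # Non-personalized response - no name supplied
    curl "http://localhost:7071/api/HttpTrigger1"
    ```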

    🎉 Congratulations

    You created and ran a function app locally!

    (Re)Deploy to Azure

    Now, just follow the creating a function app in Azure steps to deploy it to Azure, using an active subscription! The deployed app resource should now show up under the Function App Resources where you can click Execute Function Now to test the Azure-deployed version instead. You can also look up the function URL in the portal and visit that link in your local browser to trigger the function without the name context.

    🎉 Congratulations

    You have an Azure-hosted serverless function app!

    Challenge yourself and try to change the code and redeploy to Azure to return something different. You have effectively created a serverless API endpoint!


    About Core Tools

    That was a lot to cover! In the next few days we'll have more examples for Azure Functions app development - focused on different programming languages. So let's wrap today's post by reviewing two helpful resources.

    First, let's talk about Azure Functions Core Tools - the command-line tool that lets you develop, manage, and deploy Azure Functions projects from your local development environment. It is used transparently by the VS Code extension - but you can use it directly from a terminal for a powerful command-line end-to-end developer experience! The Core Tools commands cover the full local-to-cloud lifecycle: creating projects and functions, running them locally, and publishing them to Azure.

    Learn how to work with Azure Functions Core Tools. Not only can it help with quick command execution, it can also be invaluable for debugging issues that may not always be visible or understandable in an IDE.
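
    As a quick taste of that command-line workflow, here is a minimal sketch of the Core Tools commands you would typically reach for; the project, function, and app names are placeholders:

    ```bash
    func init MyFunctionsProject --worker-runtime node      # scaffold a new function app project
    cd MyFunctionsProject
    func new --template "HTTP trigger" --name HttpTrigger1  # add a function from a template
    func start                                              # run the app on the local Functions runtime
    func azure functionapp publish <APP_NAME>               # deploy to an existing Azure Function App
    ```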

    About Local Testing

    You might have noticed that the scaffold also produced a local.settings.json file. What is that and why is it useful? By definition, the local.settings.json file "stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally."

    Read the guidance on Code and test Azure Functions Locally to learn more about how to configure development environments locally, for your preferred programming language, to support testing and debugging on the local Functions runtime.

    Exercise

    We made it! Now it's your turn!! Here are a few things you can try to apply what you learned and reinforce your understanding:

    Resources

    Bookmark and visit the #30DaysOfServerless Collection. It's the one-stop collection of resources we will keep updated with links to relevant documentation and learning resources.

diff --git a/blog/tags/docker-compose/index.html b/blog/tags/docker-compose/index.html

    One post tagged with "docker-compose"

    View All Tags

    · 8 min read
    Paul Yu

    Welcome to Day 10 of #30DaysOfServerless!

    We continue our exploration into Azure Container Apps, with today's focus being communication between microservices and how to configure your Azure Container Apps environment in the context of a deployment example.


    What We'll Cover

    • ACA Environments & Virtual Networking
    • Basic Microservices Communications
    • Walkthrough: ACA Deployment Example
    • Summary and Next Steps
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    In yesterday's post, we learned what the Azure Container Apps (ACA) service is and the problems it aims to solve. It is considered a Container-as-a-Service platform since many of the complex implementation details of running a Kubernetes cluster are managed for you.

    Some of the use cases for ACA include event-driven processing jobs and background tasks, but this article will focus on hosting microservices and how they can communicate with each other within the ACA service. By the end of this article, you will have a solid understanding of how networking and communication are handled, and you'll leave with a few tutorials to try.

    Environments and virtual networking in ACA

    Before we jump into microservices communication, we should review how networking works within ACA. With ACA being a managed service, Azure will take care of most of your underlying infrastructure concerns. As you provision an ACA resource, Azure provisions an Environment to deploy Container Apps into. This environment is your isolation boundary.

    Azure Container Apps Environment

    By default, Azure creates and manages a new Virtual Network (VNET) for you and the VNET is associated with the environment. As you deploy container apps, they are deployed into the same VNET and the environment is assigned a static public IP address which allows your apps to be accessible over the internet. This VNET is not visible or manageable.

    If you need control of the networking flows within the VNET, you can pre-provision one and tell Azure to deploy an environment within it. This "bring-your-own" VNET model allows you to deploy an environment in either External or Internal modes. Deploying an environment in External mode gives you the flexibility of managing your own VNET, while still allowing your containers to be accessible from outside the environment; a static public IP address is assigned to the environment. When deploying in Internal mode, your containers are accessible within the environment and/or VNET but not accessible from the internet.

    Bringing your own VNET will require some planning, and you will need to dedicate an empty subnet that will be used exclusively by the ACA environment. The size of your subnet depends on how many containers you plan on deploying and your scaling requirements; one requirement to know is that the subnet address range must have a /23 CIDR prefix at minimum. You will also need to think about your deployment strategy, since ACA has the concept of Revisions which will also consume IPs from your subnet.
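
    For reference, creating an environment with your own VNET looks roughly like the sketch below; the names and subnet resource ID are placeholders, and --internal-only switches the environment into Internal mode:

    ```azurecli
    az containerapp env create \
      --name my-aca-env \
      --resource-group my-rg \
      --location eastus \
      --infrastructure-subnet-resource-id "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/aca-subnet" \
      --internal-only true
    ```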

    Some additional restrictions to consider when planning your subnet address space are listed in the Resources section below and can be addressed in future posts, so be sure to follow us on dev.to and bookmark the ServerlessSeptember site.

    Basic microservices communication in ACA

    When it comes to communications between containers, ACA addresses this concern with its Ingress capabilities. With HTTP Ingress enabled on your container app, you can expose your app on an HTTPS endpoint.

    If your environment is deployed using default networking and your containers need to be accessible from outside the environment, you will need to set the Ingress traffic option to Accepting traffic from anywhere. This will generate a Fully Qualified Domain Name (FQDN) which you can use to access your app right away. The ingress feature also generates and assigns a Secure Sockets Layer (SSL) certificate for the FQDN.

    External ingress on Container App

    If your environment is deployed using default networking and your containers only need to communicate with other containers in the environment, you'll need to set the Ingress traffic option to Limited to Container Apps Environment. You get a FQDN here as well, but in the section below we'll see how that changes.

    Internal ingress on Container App
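
    In CLI terms, those portal options map roughly to the ingress commands below (the resource group name is a placeholder):

    ```azurecli
    # Accept traffic from anywhere (external ingress)
    az containerapp ingress enable --name greeting-service --resource-group my-rg \
      --type external --target-port 80

    # Limit traffic to the Container Apps environment (internal ingress)
    az containerapp ingress enable --name hello-service --resource-group my-rg \
      --type internal --target-port 80
    ```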

    As mentioned in the networking section above, if you deploy your ACA environment into a VNET in internal mode, your options will be Limited to Container Apps Environment or Limited to VNet.

    Ingress on internal virtual network

    Note how the Accepting traffic from anywhere option is greyed out. If your VNET is deployed in external mode, then the option will be available.

    Let's walk though an example ACA deployment

    The diagram below illustrates a simple microservices application that I deployed to ACA. The three container apps all have ingress enabled. The greeting-service app calls two backend services; a hello-service that returns the text Hello (in random casing) and a world-service that returns the text World (in a few random languages). The greeting-service concatenates the two strings together and returns Hello World to the browser. The greeting-service is the only service accessible via external ingress while two backend services are only accessible via internal ingress.

    Greeting Service overview

    With ingress enabled, let's take a quick look at the FQDN structures. Here is the FQDN of the external greeting-service.

    https://greeting-service.victoriouswave-3749d046.eastus.azurecontainerapps.io

    We can break it down into these components:

    https://[YOUR-CONTAINER-APP-NAME].[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    And here is the FQDN of the internal hello-service.

    https://hello-service.internal.victoriouswave-3749d046.eastus.azurecontainerapps.io

    Can you spot the difference between FQDNs?

    That was too easy 😉... the word internal is added as a subdomain in the FQDN between your container app name and the random name for all internal ingress endpoints.

    https://[YOUR-CONTAINER-APP-NAME].internal.[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    Now that we know the internal service FQDNs, we use them in the greeting-service app to achieve basic service-to-service communications.

    So we can inject the FQDNs of downstream APIs into upstream apps using environment variables, but the downside to this approach is that you need to deploy the downstream containers ahead of time, and this dependency will need to be planned for during your deployment process. There are ways around this, and one option is to leverage the auto-injected environment variables within your app code.

    If I use the Console blade for the hello-service container app and run the env command, you will see environment variables named CONTAINER_APP_NAME and CONTAINER_APP_ENV_DNS_SUFFIX. You can use these values to determine FQDNs within your upstream app.

    hello-service environment variables

    Back in the greeting-service container, I can invoke the hello-service container's sayHello method. I know the container app name is hello-service and this service is exposed over an internal ingress; therefore, if I add the internal subdomain to the CONTAINER_APP_ENV_DNS_SUFFIX, I can make an HTTP request to the hello-service from my greeting-service container.

    Invoke the sayHello method from the greeting-service container
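
    Putting that together, a request from a shell inside the greeting-service container might look like this minimal sketch; the /sayhello route simply mirrors the method mentioned above, so treat the path as illustrative:

    ```bash
    # CONTAINER_APP_ENV_DNS_SUFFIX is auto-injected by ACA into every container
    HELLO_FQDN="hello-service.internal.${CONTAINER_APP_ENV_DNS_SUFFIX}"
    curl "https://${HELLO_FQDN}/sayhello"
    ```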

    As you can see, the ingress feature enables communication with other container apps over HTTP/S, and ACA injects environment variables into our container to help determine what the ingress FQDNs would be. All we need now is a little bit of code modification in the greeting-service app to build the FQDNs of our backend APIs by retrieving these environment variables.

    Greeting service code

    ... and now we have a working microservices app on ACA! 🎉

    Hello World

    Summary and next steps

    We've covered Container Apps networking and the basics of how containers communicate with one another. However, there is a better way to address service-to-service invocation using Dapr, which is an open-source framework for building microservices. It is natively integrated into the ACA service and in a future post, you'll learn how to enable it in your Container App to address microservices concerns and more. So stay tuned!

    Exercises

    As a takeaway for today's post, I encourage you to complete this tutorial, and if you'd like to deploy the sample app that was presented in this article, my teammate @StevenMurawski is hosting a docker-compose-examples repo which includes samples for deploying to ACA using Docker Compose files. To learn more about the az containerapp compose command, links to his blog articles are listed in the Resources section below.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun packing and shipping containers! See you in the next post!

    Resources

    The sample app presented here was inspired by services demonstrated in the book Introducing Distributed Application Runtime (Dapr): Simplifying Microservices Applications Development Through Proven and Reusable Patterns and Practices. Go check it out to learn more about Dapr!

diff --git a/blog/tags/dotnet/index.html b/blog/tags/dotnet/index.html

    2 posts tagged with "dotnet"

    View All Tags

    · 19 min read
    Alex Wolf

    Welcome to Day 24 of #30DaysOfServerless!

    We continue exploring E2E scenarios with this tutorial where you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps.

    The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.


    What We'll Cover

    • Deploy ASP.NET Core 6.0 app to Azure Container Apps
    • Automate deployment workflows using GitHub Actions
    • Provision and deploy resources using Azure Bicep
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.

    In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps. The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.

    You will use GitHub Actions in combination with Bicep to deploy the application. These tools provide an approachable and sustainable solution for building CI/CD pipelines and working with Container Apps.

    PRE-REQUISITES

    Architecture

    In this tutorial, we'll set up a container app environment with a separate container for each project in the sample store app. The major components of the sample project include:

    • A Blazor Server front-end web app to display product information
    • A products API to list available products
    • An inventory API to determine how many products are in stock
    • GitHub Actions and Bicep templates to provision Azure resources and then build and deploy the sample app.

    You will explore these templates later in the tutorial.

    Public internet traffic should be proxied to the Blazor app. The back-end APIs should only be reachable via requests from the Blazor app inside the container apps environment. This setup can be achieved using container apps environment ingress configurations during deployment.

    An architecture diagram of the shopping app


    Project Sources

    Want to follow along? Fork the sample below. The tutorial can be completed with or without Dapr integration. Pick the path you feel comfortable with. Dapr provides various benefits that make working with microservices easier - you can learn more in the docs. For this tutorial you will need GitHub and the Azure CLI.

    PICK YOUR PATH

    To follow along with this tutorial, fork the relevant sample project below.

    You can run the app locally from Visual Studio:

    • Right click on the Blazor Store project and select Set as Startup Project.
    • Press the start button at the top of Visual Studio to run the app.
    • (Once running) start each API in the background by
    • right-clicking on the project node
    • selecting Debug --> Start without debugging.

    Once the Blazor app is running, you should see something like this:

    An architecture diagram of the shopping app


    Configuring Azure credentials

    In order to deploy the application to Azure through GitHub Actions, you first need to create a service principal. The service principal will allow the GitHub Actions process to authenticate to your Azure subscription to create resources and deploy code. You can learn more about Service Principals in the Azure CLI documentation. For this step you'll need to be logged into the Azure CLI.

    1) If you have not done so already, make sure to fork the sample project to your own GitHub account or organization.

    2) Once you have completed this step, create a service principal using the Azure CLI command below:

    ```azurecli
    $subscriptionId=$(az account show --query id --output tsv)
    az ad sp create-for-rbac --sdk-auth --name WebAndApiSample --role Contributor --scopes /subscriptions/$subscriptionId
    ```

    3) Copy the JSON output of the CLI command to your clipboard.

    4) Under the Settings tab of your forked GitHub repo, create a new secret named AzureSPN. The name must match what the Bicep templates included in the project expect, which we'll review later. Paste the service principal JSON you copied into the secret value and save your changes. This new secret will be used by the GitHub Actions workflow to authenticate to Azure.

    A screenshot of adding GitHub secrets.
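
    If you prefer the command line, the GitHub CLI can create the same secret; this sketch assumes you saved the service principal JSON to a local file, and the owner/repo and file name are placeholders:

    ```bash
    # Create the AzureSPN secret from the saved service principal JSON
    gh secret set AzureSPN --repo <owner>/<repo> < azure-spn.json
    ```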

    Deploy using GitHub Actions

    You are now ready to deploy the application to Azure Container Apps using GitHub Actions. The sample application includes a GitHub Actions template that is configured to build and deploy any changes to a branch named deploy. The deploy branch does not exist in your forked repository by default, but you can easily create it through the GitHub user interface.

    1) Switch to the Actions tab along the top navigation of your GitHub repository. If you have not done so already, ensure that workflows are enabled by clicking the button in the center of the page.

    A screenshot showing how to enable GitHub actions

    2) Navigate to the main Code tab of your repository and select the main dropdown. Enter deploy into the branch input box, and then select Create branch: deploy from 'main'.

    A screenshot showing how to create the deploy branch

    3) On the new deploy branch, navigate down into the .github/workflows folder. You should see a file called deploy.yml, which contains the main GitHub Actions workflow script. Click on the file to view its content. You'll learn more about this file later in the tutorial.

    4) Click the pencil icon in the upper right to edit the document.

    5) Change the RESOURCE_GROUP_NAME: value to msdocswebappapis or another valid resource group name of your choosing.

    6) In the upper right of the screen, select Start commit and then Commit changes to commit your edit. This will persist the change to the file and trigger the GitHub Actions workflow to build and deploy the app.

    A screenshot showing how to commit changes

    7) Switch to the Actions tab along the top navigation again. You should see the workflow running to create the necessary resources and deploy the app. The workflow may take several minutes to run. When it completes successfully, all of the jobs should have a green checkmark icon next to them.

    The completed GitHub workflow.

    Explore the Azure resources

    Once the GitHub Actions workflow has completed successfully you can browse the created resources in the Azure portal.

    1) On the left navigation, select Resource Groups. Next, choose the msdocswebappapis resource group that was created by the GitHub Actions workflow.

    2) You should see seven resources available that match the screenshot and table descriptions below.

    The resources created in Azure.

    | Resource name | Type | Description |
    | --- | --- | --- |
    | inventory | Container app | The containerized inventory API. |
    | msdocswebappapisacr | Container registry | A registry that stores the built container images for your apps. |
    | msdocswebappapisai | Application Insights | Application Insights provides advanced monitoring, logging and metrics for your apps. |
    | msdocswebappapisenv | Container apps environment | A container environment that manages networking, security and resource concerns. All of your containers live in this environment. |
    | msdocswebappapislogs | Log Analytics workspace | A workspace environment for managing logging and analytics for the container apps environment. |
    | products | Container app | The containerized products API. |
    | store | Container app | The Blazor front-end web app. |

    3) You can view your running app in the browser by clicking on the store container app. On the overview page, click the Application Url link on the upper right of the screen.

    The link to browse the app.

    Understanding the GitHub Actions workflow

    The GitHub Actions workflow created and deployed resources to Azure using the deploy.yml file in the .github/workflows folder at the root of the project. The primary purpose of this file is to respond to events - such as commits to a branch - and run jobs to accomplish tasks. The deploy.yml file in the sample project has three main jobs:

    • Provision: Create the necessary resources in Azure, such as the container apps environment. This step leverages Bicep templates to create the Azure resources, which you'll explore in a moment.
    • Build: Create the container images for the three apps in the project and store them in the container registry.
    • Deploy: Deploy the container images to the different container apps created during the provisioning job.

    The deploy.yml file also accepts parameters to make the workflow more dynamic, such as setting the resource group name or the Azure region resources will be provisioned to.

    Below is a commented version of the deploy.yml file that highlights the essential steps.

    name: Build and deploy .NET application to Container Apps

    # Trigger the workflow on pushes to the deploy branch
    on:
    push:
    branches:
    - deploy

    env:
    # Set workflow variables
    RESOURCE_GROUP_NAME: msdocswebappapis

    REGION: eastus

    STORE_DOCKER: Store/Dockerfile
    STORE_IMAGE: store

    INVENTORY_DOCKER: Store.InventoryApi/Dockerfile
    INVENTORY_IMAGE: inventory

    PRODUCTS_DOCKER: Store.ProductApi/Dockerfile
    PRODUCTS_IMAGE: products

    jobs:
    # Create the required Azure resources
    provision:
    runs-on: ubuntu-latest

    steps:

    - name: Checkout to the branch
    uses: actions/checkout@v2

    - name: Azure Login
    uses: azure/login@v1
    with:
    creds: ${{ secrets.AzureSPN }}

    - name: Create resource group
    uses: azure/CLI@v1
    with:
    inlineScript: >
    echo "Creating resource group in Azure"
    echo "Executing 'az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}'"
    az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}

    # Use Bicep templates to create the resources in Azure
    - name: Creating resources
    uses: azure/CLI@v1
    with:
    inlineScript: >
    echo "Creating resources"
    az deployment group create --resource-group ${{ env.RESOURCE_GROUP_NAME }} --template-file '/github/workspace/Azure/main.bicep' --debug

    # Build the three app container images
    build:
    runs-on: ubuntu-latest
    needs: provision

    steps:

    - name: Checkout to the branch
    uses: actions/checkout@v2

    - name: Azure Login
    uses: azure/login@v1
    with:
    creds: ${{ secrets.AzureSPN }}

    - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v1

    - name: Login to ACR
    run: |
    set -euo pipefail
    access_token=$(az account get-access-token --query accessToken -o tsv)
    refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
    docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

    - name: Build the products api image and push it to ACR
    uses: docker/build-push-action@v2
    with:
    push: true
    tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}
    file: ${{ env.PRODUCTS_DOCKER }}

    - name: Build the inventory api image and push it to ACR
    uses: docker/build-push-action@v2
    with:
    push: true
    tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}
    file: ${{ env.INVENTORY_DOCKER }}

    - name: Build the frontend image and push it to ACR
    uses: docker/build-push-action@v2
    with:
    push: true
    tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}
    file: ${{ env.STORE_DOCKER }}

    # Deploy the three container images
    deploy:
    runs-on: ubuntu-latest
    needs: build

    steps:

    - name: Checkout to the branch
    uses: actions/checkout@v2

    - name: Azure Login
    uses: azure/login@v1
    with:
    creds: ${{ secrets.AzureSPN }}

    - name: Installing Container Apps extension
    uses: azure/CLI@v1
    with:
    inlineScript: >
    az config set extension.use_dynamic_install=yes_without_prompt

    az extension add --name containerapp --yes

    - name: Login to ACR
    run: |
    set -euo pipefail
    access_token=$(az account get-access-token --query accessToken -o tsv)
    refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
    docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

    - name: Deploy Container Apps
    uses: azure/CLI@v1
    with:
    inlineScript: >
    az containerapp registry set -n products -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

    az containerapp update -n products -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}

    az containerapp registry set -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

    az containerapp update -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}

    az containerapp registry set -n store -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

    az containerapp update -n store -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}

    - name: logout
    run: >
    az logout

    Understanding the Bicep templates

    During the provisioning stage of the GitHub Actions workflow, the main.bicep file is processed. Bicep files provide a declarative way of generating resources in Azure and are ideal for managing infrastructure as code. You can learn more about Bicep in the related documentation. The main.bicep file in the sample project creates the following resources:

    • The container registry to store images of the containerized apps.
    • The container apps environment, which handles networking and resource management for the container apps.
    • Three container apps - one for the Blazor front-end and two for the back-end product and inventory APIs.
    • Configuration values to connect these services together

    main.bicep without Dapr

    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    // create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    // create the various configuration pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the store api container app
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    main.bicep with Dapr


    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    // create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    // create the various config pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the store api container app
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: true
    }
    }


    Bicep Modules

    The main.bicep file references modules to create resources, such as module products. Modules are a feature of Bicep templates that enable you to abstract resource declarations into their own files or sub-templates. As the main.bicep file is processed, the defined modules are also evaluated. Modules allow you to create resources in a more organized and reusable way. They can also define input and output parameters that are passed to and from the parent template, such as the name of a resource.

    For example, the environment.bicep module extracts the details of creating a container apps environment into a reusable template. The module defines necessary resource dependencies such as Log Analytics Workspaces and an Application Insights instance.

    environment.bicep without Dapr

    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString

    environment.bicep with Dapr


    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString


    The container_app.bicep template defines numerous parameters to provide a reusable template for creating container apps. This allows the module to be used in other CI/CD pipelines as well.

    container_app.bicep without Dapr

    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn

    container_app.bicep with Dapr


    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param appProtocol string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn


    Understanding configuration differences with Dapr

    The code for this specific sample application is largely the same whether or not Dapr is integrated. However, even with this simple app, there are a few benefits and configuration differences when using Dapr that are worth exploring.

    In this scenario most of the changes are related to communication between the container apps. However, you can explore the full range of Dapr benefits by reading the Dapr integration with Azure Container Apps article in the conceptual documentation.

    Without Dapr

    Without Dapr, the main.bicep template handles wiring up the front-end store app to communicate with the back-end APIs by manually managing environment variables. The Bicep template retrieves the fully qualified domain names (FQDNs) of the API apps as output parameters when they are created. Those values are then set as environment variables on the store container app.


    // Retrieve environment variables from API container creation
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    // create the store api container app, passing in config
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    The environment variables are then retrieved inside of the program class and used to configure the base URLs of the corresponding HTTP clients.


    builder.Services.AddHttpClient("Products", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("ProductsApi")));
    builder.Services.AddHttpClient("Inventory", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("InventoryApi")));

    With Dapr

    Dapr can be enabled on a container app when it is created, as seen below. This configuration adds a Dapr sidecar to the app to streamline discovery and communication features between the different container apps in your environment.


    // Create the container app with Dapr enabled
    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]

    // Rest of template omitted for brevity...
    }
    }

    Some of these Dapr features can be surfaced through the program file. You can configure your HttpClient to leverage Dapr configurations when communicating with other apps in your environment.


    // reconfigure code to make requests to Dapr sidecar
    var baseURL = (Environment.GetEnvironmentVariable("BASE_URL") ?? "http://localhost") + ":" + (Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500");
    builder.Services.AddHttpClient("Products", (httpClient) =>
    {
        httpClient.BaseAddress = new Uri(baseURL);
        httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Products");
    });

    builder.Services.AddHttpClient("Inventory", (httpClient) =>
    {
        httpClient.BaseAddress = new Uri(baseURL);
        httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Inventory");
    });


    Clean up resources

    If you're not going to continue to use this application, you can delete the Azure Container Apps and all the associated services by removing the resource group.

    Follow these steps in the Azure portal to remove the resources you created:

    1. In the Azure portal, navigate to the msdocswebappapis resource group using the left navigation or search bar.
    2. Select the Delete resource group button at the top of the resource group Overview.
    3. Enter the resource group name msdocswebappapis in the Are you sure you want to delete "msdocswebappapis" confirmation dialog.
    4. Select Delete.
      The process to delete the resource group may take a few minutes to complete.
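
    If you'd rather clean up from the CLI instead of the portal, a single command removes the resource group and everything in it:

    ```azurecli
    az group delete --name msdocswebappapis --yes --no-wait
    ```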
diff --git a/blog/tags/dotnet/page/2/index.html b/blog/tags/dotnet/page/2/index.html

    2 posts tagged with "dotnet"

    View All Tags

    · 10 min read
    Mike James
    Matt Soucoup

    Welcome to Day 6 of #30DaysOfServerless!

    The theme for this week is Azure Functions. Today we're going to talk about why Azure Functions are a great fit for .NET developers.


    What We'll Cover

    • What is serverless computing?
    • How does Azure Functions fit in?
    • Let's build a simple Azure Function in .NET
    • Developer Guide, Samples & Scenarios
    • Exercise: Explore the Create Serverless Applications path.
    • Resources: For self-study!

    A banner image that has the title of this article with the author&#39;s photo and a drawing that summarizes the demo application.


    The leaves are changing colors and there's a chill in the air, or for those lucky folks in the Southern Hemisphere, the leaves are budding and a warmth is in the air. Either way, that can only mean one thing - it's Serverless September!🍂 So today, we're going to take a look at Azure Functions - what they are, and why they're a great fit for .NET developers.

    What is serverless computing?

    For developers, serverless computing means you write highly compact individual functions that do one thing - and run in the cloud. These functions are triggered by some external event. That event could be a record being inserted into a database, a file uploaded into BLOB storage, a timer interval elapsed, or even a simple HTTP request.

    But... servers are still definitely involved! What has changed from other types of cloud computing is that the idea and ownership of the server has been abstracted away.

    A lot of the time you'll hear folks refer to this as Functions as a Service or FaaS. The defining characteristic is all you need to do is put together your application logic. Your code is going to be invoked in response to events - and the cloud provider takes care of everything else. You literally get to focus on only the business logic you need to run in response to something of interest - no worries about hosting.

    You do not need to worry about wiring up the plumbing between the service that originates the event and the serverless runtime environment. The cloud provider will handle the mechanism to call your function in response to whatever event you chose to have the function react to. And it passes along any data that is relevant to the event to your code.

    And here's a really neat thing. You only pay for the time the serverless function is running. So, if you have a function that is triggered by an HTTP request, and you rarely get requests to your function, you would rarely pay.

    How does Azure Functions fit in?

    Microsoft's Azure Functions is a modern serverless architecture, offering event-driven cloud computing that is easy for developers to use. It provides a way to run small pieces of code or Functions in the cloud without developers having to worry themselves about the infrastructure or platform the Function is running on.

    That means we're only concerned about writing the logic of the Function. And we can write that logic in our choice of languages... like C#. We are also able to add packages from NuGet to Azure Functions—this way, we don't have to reinvent the wheel and can use well-tested libraries.

    And the Azure Functions runtime takes care of a ton of neat stuff for us, like passing in information about the event that caused it to kick off - in a strongly typed variable. It also "binds" to other services, like Azure Storage, so we can easily access those services from our code without having to worry about new'ing them up.

    Let's build an Azure Function!

    Scaffold the Function

    Don't worry about having an Azure subscription or even being connected to the internet—we can develop and debug Azure Functions locally using either Visual Studio or Visual Studio Code!

    For this example, I'm going to use Visual Studio Code to build up a Function that responds to an HTTP trigger and then writes a message to an Azure Storage Queue.

    Diagram of how the Azure Function will use the HTTP trigger and the Azure Storage Queue binding

    The incoming HTTP call is the trigger and the message queue the Function writes to is an output binding. Let's have at it!

    info

    You do need to have some tools downloaded and installed to get started. First and foremost, you'll need Visual Studio Code. Then you'll need the Azure Functions extension for VS Code to do the development with. Finally, you'll need the Azurite Emulator installed as well—this will allow us to write to a message queue locally.

    Oh! And of course, .NET 6!

    Now with all of the tooling out of the way, let's write a Function!

    1. Fire up Visual Studio Code. Then, from the command palette, type: Azure Functions: Create New Project

      Screenshot of create a new function dialog in VS Code

    2. Follow the prompts to choose the directory to create the project in and the .NET runtime and language to use.

      Screenshot of VS Code prompting which directory and language to use

    3. Pick .NET 6 and C#.

      It will then prompt you to pick the folder in which your Function app resides and then select a template.

      Screenshot of VS Code prompting you to pick the Function trigger template

      Pick the HTTP trigger template. When prompted for a name, call it: PostToAQueue.

    Execute the Function Locally

    1. After giving it a namespace, it prompts for an authorization level—pick Anonymous. Now we have a Function! Let's go ahead and hit F5 and see it run!
    info

    After the templates have finished installing, you may get a prompt to download additional components—these are NuGet packages. Go ahead and do that.

    When it runs, you'll see the Azure Functions logo appear in the Terminal window with the URL the Function is located at. Copy that link.

    Screenshot of the Azure Functions local runtime starting up

    2. Type the link into a browser, adding a name parameter as shown in this example: http://localhost:7071/api/PostToAQueue?name=Matt. The Function will respond with a message. You can even set breakpoints in Visual Studio Code and step through the code!

    Write To Azure Storage Queue

    Next, we'll get this HTTP trigger Function to write to a local Azure Storage Queue. First we need to add the Storage NuGet package to our project. In the terminal, type:

    dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage

    Then set a configuration setting to tell the Function runtime where to find the Storage. Open up local.settings.json and set "AzureWebJobsStorage" to "UseDevelopmentStorage=true". The full file will look like:

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "AzureWebJobsDashboard": ""
      }
    }

    Then create a new class within your project. This class will hold nothing but properties. Call it whatever you want and add whatever properties you want to it. I called mine TheMessage and added Id and Name properties to it.

    public class TheMessage
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    Finally, change your PostToAQueue Function, so it looks like the following:


    public static class PostToAQueue
    {
        [FunctionName("PostToAQueue")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
            [Queue("demoqueue", Connection = "AzureWebJobsStorage")] IAsyncCollector<TheMessage> messages,
            ILogger log)
        {
            string name = req.Query["name"];

            await messages.AddAsync(new TheMessage { Id = System.Guid.NewGuid().ToString(), Name = name });

            return new OkResult();
        }
    }

    Note the addition of the messages variable. This is telling the Function to use the storage connection we specified before via the Connection property. And it is also specifying which queue to use in that storage account, in this case demoqueue.

    All the code does is pull the name out of the query string, create a new TheMessage instance, and add it to the IAsyncCollector variable.

    That will add the new message to the queue!

    Make sure Azurite is started within VS Code (both the queue and blob emulators). Run the app and send the same GET request as before: http://localhost:7071/api/PostToAQueue?name=Matt.

    If you have the Azure Storage Explorer installed, you can browse your local Queue and see the new message in there!

    Screenshot of Azure Storage Explorer with the new message in the queue
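
    If you prefer the command line, you can also peek at the local queue with the Azure CLI. Here's a quick sketch (not part of the original walkthrough) that assumes Azurite is running and uses the demoqueue name from the binding above:

    az storage message peek --queue-name demoqueue --connection-string "UseDevelopmentStorage=true" --num-messages 5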

    Summing Up

    We had a quick look at what Microsoft's serverless offering, Azure Functions, is comprised of. It's a full-featured FaaS offering that enables you to write functions in your language of choice, including reusing packages such as those from NuGet.

    A highlight of Azure Functions is the way they are triggered and bound. The triggers define how a Function starts, and bindings are akin to input and output parameters on it that correspond to external services. The best part is that the Azure Function runtime takes care of maintaining the connection to the external services so you don't have to worry about new'ing up or disposing of the connections yourself.

    We then wrote a quick Function that gets triggered off an HTTP request and then writes a query string parameter from that request into a local Azure Storage Queue.

    What's Next

    So, where can you go from here?

    Think about how you can build real-world scenarios by integrating other Azure services. For example, you could use serverless integrations to build a workflow where the input payload received via an HTTP trigger is stored in Blob Storage (output binding), which in turn triggers another service (e.g., Cognitive Services) that processes the blob and returns an enhanced result.

    Keep an eye out for an update to this post where we walk through a scenario like this with code. Check out the resources below to help you get started on your own.
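
    In the meantime, here's a minimal sketch of what the first hop of such a workflow could look like - an HTTP-triggered Function writing its request payload to Blob Storage through an output binding. The function name, the uploads container and the {rand-guid} blob path are placeholders for illustration only:

    public static class SaveToBlob
    {
        [FunctionName("SaveToBlob")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequest req,
            [Blob("uploads/{rand-guid}.json", FileAccess.Write, Connection = "AzureWebJobsStorage")] Stream outputBlob,
            ILogger log)
        {
            // Copy the incoming request body straight into the new blob
            await req.Body.CopyToAsync(outputBlob);
            return new OkResult();
        }
    }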

    Exercise

    This brings us close to the end of Week 1 with Azure Functions. We've learned core concepts, built and deployed our first Functions app, and explored quickstarts and scenarios for different programming languages. So, what can you do to explore this topic on your own?

    • Explore the Create Serverless Applications learning path which has several modules that explore Azure Functions integrations with various services.
    • Take up the Cloud Skills Challenge and complete those modules in a fun setting where you compete with peers for a spot on the leaderboard!

    Then come back tomorrow as we wrap up the week with a discussion on end-to-end scenarios, a recap of what we covered this week, and a look at what's ahead next week.

    Resources

    Start here for developer guidance in getting started with Azure Functions as a .NET/C# developer:

    Then learn about supported Triggers and Bindings for C#, with code snippets to show how they are used.

    Finally, explore Azure Functions samples for C# and learn to implement serverless solutions. Examples include:

    - + \ No newline at end of file diff --git a/blog/tags/event-hubs/index.html b/blog/tags/event-hubs/index.html index 564f72ef13..9ee92693d5 100644 --- a/blog/tags/event-hubs/index.html +++ b/blog/tags/event-hubs/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    One post tagged with "event-hubs"

    View All Tags

    · 10 min read
    Ayca Bas

    Welcome to Day 20 of #30DaysOfServerless!

    Every day, millions of people spend their precious time in productivity tools. What if you could use the data and intelligence behind Microsoft applications (Microsoft Teams, Outlook, and many other Office apps) to build seamless automations and custom apps to boost productivity?

    In this post, we'll learn how to build a seamless onboarding experience for new employees joining a company with the power of Microsoft Graph, integrated with Event Hubs and Logic Apps!


    What We'll Cover

    • ✨ The power of Microsoft Graph
    • 🖇️ How do Microsoft Graph and Event Hubs work together?
    • 🛠 Let's Build an Onboarding Workflow!
      • 1️⃣ Setup Azure Event Hubs + Key Vault
      • 2️⃣ Subscribe to users, receive change notifications from Logic Apps
      • 3️⃣ Create Onboarding workflow in the Logic Apps
    • 🚀 Debug: Your onboarding experience
    • ✋ Exercise: Try this tutorial out yourself!
    • 📚 Resources: For Self-Study


    ✨ The Power of Microsoft Graph

    Microsoft Graph is the gateway to data and intelligence in the Microsoft 365 platform. Microsoft Graph exposes REST APIs and client libraries to access data across Microsoft 365 core services such as Calendar, Teams, To Do, Outlook, People, Planner, OneDrive, OneNote and more.

    Overview of Microsoft Graph

    You can build custom experiences by using Microsoft Graph, such as automating the onboarding process for new employees. When new employees are created in Azure Active Directory, they will be automatically added to the Onboarding team on Microsoft Teams.

    Solution architecture


    🖇️ Microsoft Graph with Event Hubs

    Microsoft Graph uses a webhook mechanism to track changes in resources and deliver change notifications to the clients. For example, with Microsoft Graph Change Notifications, you can receive change notifications when:

    • a new task is added in the to-do list
    • a user changes the presence status from busy to available
    • an event is deleted/cancelled from the calendar

    If you'd like to track a large set of resources at a high frequency, use Azure Event Hubs instead of traditional webhooks to receive change notifications. Azure Event Hubs is a popular real-time events ingestion and distribution service built for scale.

    EVENT GRID - PARTNER EVENTS

    Microsoft Graph Change Notifications can be also received by using Azure Event Grid -- currently available for Microsoft Partners! Read the Partner Events Overview documentation for details.

    Setup Azure Event Hubs + Key Vault

    To get Microsoft Graph Change Notifications delivered to Azure Event Hubs, we'll have to set up Azure Event Hubs and Azure Key Vault. We'll use Azure Key Vault to access the Event Hubs connection string.

    1️⃣ Create Azure Event Hubs

    1. Go to Azure Portal and select Create a resource, type Event Hubs and select Create.
    2. Fill in the Event Hubs namespace creation details, and then click Create.
    3. Go to the newly created Event Hubs namespace page, select Event Hubs tab from the left pane and + Event Hub:
      • Name your Event Hub as Event Hub
      • Click Create.
    4. Click the name of the Event Hub, and then select Shared access policies and + Add to add a new policy:
      • Give a name to the policy
      • Check Send and Listen
      • Click Create.
    5. After the policy has been created, click the name of the policy to open the details panel, and then copy the Connection string-primary key value. Write it down; you'll need it for the next step.
    6. Go to the Consumer groups tab in the left pane and select + Consumer group, name your consumer group onboarding, and select Create.

    2️⃣ Create Azure Key Vault

    1. Go to Azure Portal and select Create a resource, type Key Vault and select Create.
    2. Fill in the Key Vault creation details, and then click Review + Create.
    3. Go to the newly created Key Vault, select the Secrets tab from the left pane, and click + Generate/Import:
      • Give a name to the secret
      • For the value, paste in the connection string you generated at the Event Hubs step
      • Click Create
      • Copy the name of the secret.
    4. Select Access Policies from the left pane and + Add Access Policy:
      • For Secret permissions, select Get
      • For Principal, select Microsoft Graph Change Tracking
      • Click Add.
    5. Select Overview tab from the left pane and copy the Vault URI.

    Subscribe to users and receive change notifications with Logic Apps

    To start receiving Microsoft Graph Change Notifications, we'll need to create a subscription to the resource that we'd like to track - here, 'users'. We'll use Azure Logic Apps to create the subscription.

    To create a subscription for Microsoft Graph Change Notifications, we'll need to make an HTTP POST request to https://graph.microsoft.com/v1.0/subscriptions. Microsoft Graph requires Azure Active Directory authentication to make API calls. First, we'll need to register an app in Azure Active Directory, and then we will make the Microsoft Graph Subscription API call with Azure Logic Apps.

    1️⃣ Create an app in Azure Active Directory

    1. In the Azure Portal, go to Azure Active Directory and select App registrations from the left pane and select + New registration. Fill in the details for the new App registration form as below:
      • Name: Graph Subscription Flow Auth
      • Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
      • Select Register.
    2. Go to the newly registered app in Azure Active Directory and select API permissions:
      • Select + Add a permission and Microsoft Graph
      • Select Application permissions and add User.Read.All and Directory.Read.All.
      • Select Grant admin consent for the organization
    3. Select Certificates & secrets tab from the left pane, select + New client secret:
      • Choose desired expiry duration
      • Select Add
      • Copy the value of the secret.
    4. Go to Overview from the left pane, copy Application (client) ID and Directory (tenant) ID.

    2️⃣ Create subscription with Azure Logic Apps

    1. Go to Azure Portal and select Create a resource, type Logic apps and select Create.

    2. Fill in the Logic Apps creation details, and then click Create.

    3. Go to the newly created Logic Apps page, select Workflows tab from the left pane and select + Add:

      • Give a name to the new workflow as graph-subscription-flow
      • Select Stateful as a state type
      • Click Create.
    4. Go to graph-subscription-flow, and then select Designer tab.

    5. In the Choose an operation section, search for Schedule and select Recurrence as a trigger. Fill in the parameters as below:

      • Interval: 61
      • Frequency: Minute
      • Time zone: Select your own time zone
      • Start time: Set a start time
    6. Select + button in the flow and select add an action. Search for HTTP and select HTTP as an action. Fill in the parameters as below:

      • Method: POST
      • URI: https://graph.microsoft.com/v1.0/subscriptions
      • Headers:
        • Key: Content-type
        • Value: application/json
      • Body:
      {
        "changeType": "created, updated",
        "clientState": "secretClientValue",
        "expirationDateTime": "@{addHours(utcNow(), 1)}",
        "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47",
        "resource": "users"
      }

      In notificationUrl, make sure to replace <YOUR-VAULT-URI> with the vault URI and <YOUR-KEY-VAULT-SECRET-NAME> with the secret name that you copied from the Key Vault.

      In resource, define the resource type you'd like to track changes for. For our example, we will track changes for the users resource.

      • Authentication:
        • Authentication type: Active Directory OAuth
        • Authority: https://login.microsoft.com
        • Tenant: Directory (tenant) ID copied from AAD app
        • Audience: https://graph.microsoft.com
        • Client ID: Application (client) ID copied from AAD app
        • Credential Type: Secret
        • Secret: value of the secret copied from AAD app
    7. Select Save and run your workflow from the Overview tab.

      Check your subscription in Graph Explorer: If you'd like to make sure that your subscription was created successfully by Logic Apps, you can go to Graph Explorer, log in with your Microsoft 365 account, and make a GET request to https://graph.microsoft.com/v1.0/subscriptions. Your subscription should appear in the response after it's created successfully.

    Subscription workflow success

    After the subscription is created successfully by Logic Apps, Azure Event Hubs will receive notifications whenever a new user is created in Azure Active Directory.
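
    The events that land in Event Hubs are standard Microsoft Graph change notifications. Simplified for illustration (real payloads include additional fields), a notification for a newly created user looks roughly like this:

    {
      "value": [
        {
          "subscriptionId": "<subscription-id>",
          "changeType": "created",
          "clientState": "secretClientValue",
          "resource": "Users/<user-id>",
          "resourceData": {
            "@odata.type": "#Microsoft.Graph.User",
            "@odata.id": "Users/<user-id>",
            "id": "<user-id>"
          },
          "tenantId": "<tenant-id>"
        }
      ]
    }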


    Create Onboarding workflow in Logic Apps

    We'll create a second workflow in the Logic Apps to receive change notifications from Event Hubs when there is a new user created in Azure Active Directory, and add the new user to the Onboarding team on Microsoft Teams.

    1. Go to the Logic Apps you created in the previous steps, select Workflows tab and create a new workflow by selecting + Add:
      • Give a name to the new workflow as teams-onboarding-flow
      • Select Stateful as a state type
      • Click Create.
    2. Go to teams-onboarding-flow, and then select Designer tab.
    3. In the Choose an operation section, search for Event Hub, select When events are available in Event Hub as a trigger. Setup Event Hub connection as below:
      • Create Connection:
        • Connection name: Connection
        • Authentication Type: Connection String
        • Connection String: Go to Event Hubs > Shared Access Policies > RootManageSharedAccessKey and copy Connection string–primary key
        • Select Create.
      • Parameters:
        • Event Hub Name: Event Hub
        • Consumer Group Name: onboarding
    4. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: Events
    5. Inside For each, select + in the flow and add an action, search for Data operations and select Parse JSON. Fill in Parse JSON action as below:
      • Content: Events Content
      • Schema: Copy the json content from schema-parse.json and paste as a schema
    6. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: value
      1. Inside For each, select + in the flow and add an action, search for Microsoft Teams and select Add a member to a team. Login with your Microsoft 365 account to create a connection and fill in Add a member to a team action as below:
      • Team: Create an Onboarding team on Microsoft Teams and select it
      • A user AAD ID for the user to add to a team: id
    7. Select Save.

    🚀 Debug your onboarding experience

    To debug our onboarding experience, we'll need to create a new user in Azure Active Directory and see if they're automatically added to the Onboarding team on Microsoft Teams.

    1. Go to Azure Portal and select Azure Active Directory from the left pane and go to Users. Select + New user and Create new user. Fill in the details as below:

      • User name: JaneDoe
      • Name: Jane Doe

      new user in Azure Active Directory

    2. When you add Jane Doe as a new user, it should trigger the teams-onboarding-flow to run. teams onboarding flow success

    3. Once the teams-onboarding-flow runs successfully, you should be able to see Jane Doe as a member of the Onboarding team on Microsoft Teams! 🥳 new member in Onboarding team on Microsoft Teams

    Congratulations! 🎉

    You just built an onboarding experience using Azure Logic Apps, Azure Event Hubs and Azure Key Vault.


    📚 Resources

    - + \ No newline at end of file diff --git a/blog/tags/hacktoberfest/index.html b/blog/tags/hacktoberfest/index.html index 5c2bfb7e19..b19f6e7860 100644 --- a/blog/tags/hacktoberfest/index.html +++ b/blog/tags/hacktoberfest/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "hacktoberfest"

    View All Tags

    · 5 min read
    Savannah Ostrowski

    Welcome to Beyond #30DaysOfServerless in October!

    Yes, it's October!! And since we ended #ServerlessSeptember with a focus on End-to-End Development for Serverless on Azure, we thought it would be good to share updates in October that can help you skill up even further.

    Today, we're following up on the Code to Cloud with azd blog post (Day #29) where we introduced the Azure Developer CLI (azd), an open-source tool for streamlining your end-to-end developer experience going from local development environment to Azure cloud. In today's post, we celebrate the October 2022 release of the tool, with three cool new features.

    And if it's October, it must be #Hacktoberfest!! Read on to learn about how you can take advantage of one of the new features, to contribute to the azd open-source community and ecosystem!

    Ready? Let's go!


    What We'll Cover

    • Azure Friday: Introducing the Azure Developer CLI (Video)
    • October 2022 Release: What's New in the Azure Developer CLI?
      • Azure Pipelines for CI/CD: Learn more
      • Improved Infrastructure as Code structure via Bicep modules: Learn more
      • A new azd template gallery: The new azd-templates gallery for community use! Learn more
    • Awesome-Azd: The new azd-templates gallery for Community use
      • Features: discover, create, contribute, request - templates
      • Hacktoberfest: opportunities to contribute in October - and beyond!


    Azure Friday

    This post is a follow-up to our #ServerlessSeptember post on Code to Cloud with Azure Developer CLI where we introduced azd, a new open-source tool that makes it quick and simple for you to move your application from a local development environment to Azure, streamlining your end-to-end developer workflow in the process.
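
    If you haven't tried azd yet, the basic flow is only a few commands. The snippet below is a rough sketch - the template name is just an example, and flags may evolve, so check the azd docs for the current syntax:

    # install azd (see the docs for Windows/macOS options)
    curl -fsSL https://aka.ms/install-azd.sh | bash

    # scaffold a project from a template, then provision infrastructure and deploy in one step
    azd init --template todo-nodejs-mongo
    azd up

    # wire up CI/CD - the October 2022 release adds Azure Pipelines support alongside GitHub Actions
    azd pipeline config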

    Prefer to watch a video overview? I have you covered! Check out my recent conversation with Scott Hanselman on Azure Friday where we:

    • talked about the code-to-cloud developer journey
    • walked through the ins and outs of an azd template
    • explored Azure Developer CLI commands in the terminal and VS Code, and
    • (probably most importantly) got a web app up and running on Azure with a database, Key Vault and monitoring all in a couple of minutes

    October Release

    We're pleased to announce the October 2022 release of the Azure Developer CLI (currently 0.3.0-beta.2). Read the release announcement for more details. Here are the highlights:

    • Azure Pipelines for CI/CD: This addresses azure-dev#101, adding support for Azure Pipelines (alongside GitHub Actions) as a CI/CD provider. Learn more about usage and related documentation.
    • Improved Infrastructure as Code structure via Bicep modules: This addresses azure-dev#543, which recognized the complexity of using a single resources.bicep file for all resources. With this release, azd templates now come with Bicep modules organized by purpose making it easier to edit and understand. Learn more about this structure, and how to use it.
    • New Templates Gallery - awesome-azd: This addresses azure-dev#398, which aimed to make templates more discoverable and easier to contribute. Learn more about how the new gallery improves the template discovery experience.

    In the next section, we'll dive briefly into the last feature, introducing the new awesome-azd site and resource for templates discovery and contribution. And, since it's #Hacktoberfest season, we'll talk about the Contributor Guide and the many ways you can contribute to this project - with, or without, code.


    It's awesome-azd

    Welcome to awesome-azd, a new template gallery hosted on GitHub Pages, meant to be a destination site for discovering, requesting, and contributing azd-templates for community use!

    In addition, its README reflects the awesome-list resource format, providing a location for the community to share "best of" resources for Azure Developer CLI - from blog posts and videos, to full-scale tutorials and templates.

    The Gallery is organized into three main areas:

    Take a minute to explore the Gallery and note the features:

    • Search for templates by name
    • Requested Templates - indicating asks from the community
    • Featured Templates - highlighting high-quality templates
    • Filters - to discover templates by filter and/or query combinations

    Check back often to see the latest contributed templates and requests!


    Hacktoberfest

    So, why is this a good time to talk about the Gallery? Because October means it's time for #Hacktoberfest - a month-long celebration of open-source projects and their maintainers, and an opportunity for first-time contributors to get support and guidance making their first pull-requests! Check out the #Hacktoberfest topic on GitHub for projects you can contribute to.

    And we hope you think of awesome-azd as another possible project to contribute to.

    Check out the FAQ section to learn how to create, discover, and contribute templates. Or take a couple of minutes to watch this video walkthrough from Jon Gallant:

    And don't hesitate to reach out to us - either via Issues on the repo, or in the Discussions section of this site, to give us feedback!

    Happy Hacking! 🎃


    - + \ No newline at end of file diff --git a/blog/tags/hello/index.html b/blog/tags/hello/index.html index 3d9f867782..92c7c08154 100644 --- a/blog/tags/hello/index.html +++ b/blog/tags/hello/index.html @@ -14,13 +14,13 @@ - +

    2 posts tagged with "hello"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    What We'll Cover

    • What is Serverless September? (6 initiatives)
    • How can I participate? (3 actions)
    • How can I skill up? (30 days)
    • Who is behind this? (Team Contributors)
    • How can you contribute? (Custom Issues)
    • Exercise: Take the Cloud Skills Challenge!
    • Resources: #30DaysOfServerless Collection.

    Serverless September

    Welcome to Day 01 of 🍂 #ServerlessSeptember! Today, we kick off a full month of content and activities to skill you up on all things Serverless on Azure with content, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfServerless in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

    What are 3 things you can do today, to jumpstart your learning journey?

    Serverless Hacks


    #30DaysOfServerless

    #30DaysOfServerless is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Serverless On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON FUNCTIONS ⚡️

    Here's a sneak peek at what we have planned for week 1. We'll start with a broad look at fundamentals, walkthrough examples for each targeted programming language, then wrap with a post that showcases the role of Azure Functions in powering different serverless scenarios.

    • Sep 02: Learn Core Concepts for Azure Functions
    • Sep 03: Build and deploy your first Function
    • Sep 04: Azure Functions - for Java Developers!
    • Sep 05: Azure Functions - for JavaScript Developers!
    • Sep 06: Azure Functions - for .NET Developers!
    • Sep 07: Azure Functions - for Python Developers!
    • Sep 08: Wrap: Azure Functions + Serverless on Azure

    Ways to Participate

    We hope you are as excited as we are, to jumpstart this journey. We want to make this a useful, beginner-friendly journey and we need your help!

    Here are the many ways you can participate:

    • Follow Azure on dev.to - we'll republish posts under this series page and welcome comments and feedback there!
    • Discussions on GitHub - Use this if you have feedback for us (on how we can improve these resources), or want to chat with your peers about serverless topics.
    • Custom Issues - just pick a template, create a new issue by filling in the requested details, and submit. You can use these to:
      • submit questions for AskTheExpert (live Q&A) ahead of time
      • submit your own articles or projects for community to learn from
      • share your ServerlessHack and get listed in our Hall Of Fame!
      • report bugs or share ideas for improvements

    Here's the list of custom issues currently defined.

    Community Buzz

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Azure Functions post tomorrow!


    - + \ No newline at end of file diff --git a/blog/tags/hello/page/2/index.html b/blog/tags/hello/page/2/index.html index 8f6c788b70..5268e02eb7 100644 --- a/blog/tags/hello/page/2/index.html +++ b/blog/tags/hello/page/2/index.html @@ -14,13 +14,13 @@ - +

    2 posts tagged with "hello"

    View All Tags

    · 3 min read
    Nitya Narasimhan
    Devanshi Joshi

    🍂 It's September?

    Well, almost! September 1 is a few days away and I'm excited! Why? Because it's the perfect time to revisit #Serverless September, a month of

    ".. content-driven learning where experts and practitioners share their insights and tutorials on how to use serverless technologies effectively in today's ecosystems"

    If the words look familiar, it's because I actually wrote them 2 years ago when we launched the 2020 edition of this series. You might even recall this whimsical image I drew to capture the concept of September (fall) and Serverless (event-driven on-demand compute). Since then, a lot has happened in the serverless ecosystem!

    You can still browse the 2020 Content Collection to find great talks, articles and code samples to get started using Serverless on Azure. But read on to learn what's new!

    🧐 What's New?

    Well - quite a few things actually. This year, Devanshi Joshi and I expanded the original concept in a number of ways. Here's just a few of them that come to mind.

    New Website

    This year, we created this website (shortcut: https://aka.ms/serverless-september) to serve as a permanent home for content in 2022 and beyond - making it a canonical source for the #serverless posts we publish to tech communities like dev.to, Azure Developer Community and Apps On Azure. We hope this also makes it easier for you to search for, or discover, current and past articles that support your learning journey!

    Start by bookmarking these two sites:

    More Options

    Previous years focused on curating and sharing content authored by Microsoft and community contributors, showcasing serverless examples and best practices. This was perfect for those who already had experience with the core devtools and concepts.

    This year, we wanted to combine beginner-friendly options (for those just starting their serverless journey) with more advanced insights (for those looking to skill up further). Here's a sneak peek at some of the initiatives we've got planned!

    We'll also explore the full spectrum of serverless - from Functions-as-a-Service (for granularity) to Containerization (for deployment) and Microservices (for scalability). Here are a few services and technologies you'll get to learn more about:

    ⚡️ Join us!

    This has been a labor of love from multiple teams at Microsoft! We can't wait to share all the resources that we hope will help you skill up on all things Serverless this September! Here are a couple of ways to participate:

    - + \ No newline at end of file diff --git a/blog/tags/index.html b/blog/tags/index.html index e38c290f38..5bea12ce1c 100644 --- a/blog/tags/index.html +++ b/blog/tags/index.html @@ -14,13 +14,13 @@ - +

    Tags

    - + \ No newline at end of file diff --git a/blog/tags/java/index.html b/blog/tags/java/index.html index 35f9a86ac2..8eb4b5d8fc 100644 --- a/blog/tags/java/index.html +++ b/blog/tags/java/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "java"

    View All Tags

    · 8 min read
    Rory Preddy

    Welcome to Day 4 of #30DaysOfServerless!

    Yesterday we walked through an Azure Functions Quickstart with JavaScript, and used it to understand the general Functions App structure, tooling and developer experience.

    Today we'll look at developing Functions app with a different programming language - namely, Java - and explore developer guidance, tools and resources to build serverless Java solutions on Azure.


    What We'll Cover


    Developer Guidance

    If you're a Java developer new to serverless on Azure, start by exploring the Azure Functions Java Developer Guide. It covers:

    In this blog post, we'll dive into one quickstart, and discuss other resources briefly, for awareness! Do check out the recommended exercises and resources for self-study!


    My First Java Functions App

    In today's post, we'll walk through the Quickstart: Azure Functions tutorial using Visual Studio Code. In the process, we'll setup our development environment with the relevant command-line tools and VS Code extensions to make building Functions app simpler.

    Note: Completing this exercise may incur a cost of a few USD cents based on your Azure subscription. Explore pricing details to learn more.

    First, make sure you have your development environment setup and configured.

    PRE-REQUISITES
    1. An Azure account with an active subscription - Create an account for free
    2. The Java Development Kit, version 11 or 8. - Install
    3. Apache Maven, version 3.0 or above. - Install
    4. Visual Studio Code. - Install
    5. The Java extension pack - Install
    6. The Azure Functions extension for Visual Studio Code - Install

    VS Code Setup

    NEW TO VISUAL STUDIO CODE?

    Start with the Java in Visual Studio Code tutorial to jumpstart your learning!

    Install the Extension Pack for Java (shown below) to install 6 popular extensions to help development workflow from creation to testing, debugging, and deployment.

    Extension Pack for Java

    Now, it's time to get started on our first Java-based Functions app.

    1. Create App

    1. Open a command-line terminal and create a folder for your project. Use the code command to launch Visual Studio Code from that directory as shown:

      $ mkdir java-function-resource-group-api
      $ cd java-function-resource-group-api
      $ code .
    2. Open the Visual Studio Code Command Palette (Ctrl + Shift + p) and select Azure Functions: Create New Project to kickstart the create workflow. Alternatively, you can click the Azure icon (on the activity sidebar) to get the Workspace window, click "+" and pick the "Create Function" option as shown below.

      Screenshot of creating function in Azure from Visual Studio Code.

    3. This triggers a multi-step workflow. Fill in the information for each step as shown in the following prompts. Important: Start this process from an empty folder - the workflow will populate it with the scaffold for your Java-based Functions app.

      For each prompt, provide the following value:

      • Choose the directory location: you should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
      • Select a language: choose Java.
      • Select a version of Java: choose Java 11 or Java 8, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally.
      • Provide a group ID: choose com.function.
      • Provide an artifact ID: enter myFunction.
      • Provide a version: choose 1.0-SNAPSHOT.
      • Provide a package name: choose com.function.
      • Provide an app name: enter HttpExample.
      • Select the build tool for Java project: choose Maven.

    Visual Studio Code uses the provided information and generates an Azure Functions project. You can view the local project files in the Explorer - it should look like this:

    Azure Functions Scaffold For Java
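
    Inside the src/main/java tree you'll find the generated function class. It typically looks roughly like the sketch below (trimmed for brevity; the exact scaffold may differ slightly by template version):

    import com.microsoft.azure.functions.*;
    import com.microsoft.azure.functions.annotation.*;
    import java.util.Optional;

    public class Function {
        @FunctionName("HttpExample")
        public HttpResponseMessage run(
                @HttpTrigger(name = "req",
                             methods = {HttpMethod.GET, HttpMethod.POST},
                             authLevel = AuthorizationLevel.ANONYMOUS)
                HttpRequestMessage<Optional<String>> request,
                final ExecutionContext context) {
            // Echo the 'name' query parameter back to the caller
            String name = request.getQueryParameters().get("name");
            if (name == null) {
                return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
                        .body("Please pass a name on the query string").build();
            }
            return request.createResponseBuilder(HttpStatus.OK)
                    .body("Hello, " + name).build();
        }
    }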

    2. Preview App

    Visual Studio Code integrates with the Azure Functions Core tools to let you run this project on your local development computer before you publish to Azure.

    1. To build and run the application, use the following Maven command. You should see output similar to that shown below.

      $ mvn clean package azure-functions:run
      ..
      ..
      Now listening on: http://0.0.0.0:7071
      Application started. Press Ctrl+C to shut down.

      Http Functions:

      HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
      ...
    2. Copy the URL of your HttpExample function from this output to a browser and append the query string ?name=<YOUR_NAME>, making the full URL something like http://localhost:7071/api/HttpExample?name=Functions. The browser should display a message that echoes back your query string value. The terminal in which you started your project also shows log output as you make requests.

    🎉 CONGRATULATIONS

    You created and ran a function app locally!

    With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger. After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code and Maven to publish and test the project on Azure.

    3. Sign into Azure

    Before you can deploy, sign in to your Azure subscription.

    az login

    The az login command signs you into your Azure account.

    Use the following command to deploy your project to a new function app.

    mvn clean package azure-functions:deploy

    When the creation is complete, the following Azure resources are created in your subscription:

    • Resource group. Named as java-functions-group.
    • Storage account. Required by Functions. The name is generated randomly based on Storage account name requirements.
    • Hosting plan. Serverless hosting for your function app. The name is java-functions-app-service-plan.
    • Function app. A function app is the deployment and execution unit for your functions. The name is randomly generated based on your artifactId, appended with a randomly generated number.

    4. Deploy App

    1. Back in the Resources area in the side bar, expand your subscription, your new function app, and Functions. Right-click (Windows) or Ctrl + click (macOS) the HttpExample function and choose Execute Function Now....

      Screenshot of executing function in Azure from Visual Studio Code.

    2. In Enter request body you see the request message body value of { "name": "Azure" }. Press Enter to send this request message to your function.

    3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.

    You can also copy the complete Invoke URL shown in the output of the publish command into a browser address bar, appending the query parameter ?name=Functions. The browser should display similar output as when you ran the function locally.

    🎉 CONGRATULATIONS

    You deployed your function app to Azure, and invoked it!

    5. Clean up

    Use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name java-functions-group

    Next Steps

    So, where can you go from here? The example above used a familiar HTTP Trigger scenario with a single Azure service (Azure Functions). Now, think about how you can build richer workflows by using other triggers and integrating with other Azure or third-party services.

    Other Triggers, Bindings

    Check out Azure Functions Samples In Java for samples (and short use cases) that highlight other triggers - with code! This includes triggers to integrate with CosmosDB, Blob Storage, Event Grid, Event Hub, Kafka and more.

    Scenario with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other Services. Here are a couple of useful tutorials:

    Exercise

    Time to put this into action and validate your development workflow:

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/javascript/index.html b/blog/tags/javascript/index.html index a9ebd80e71..5475a37aa3 100644 --- a/blog/tags/javascript/index.html +++ b/blog/tags/javascript/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "javascript"

    View All Tags

    · 7 min read
    Aaron Powell

    Welcome to Day 5 of #30DaysOfServerless!

    Yesterday we looked at Azure Functions from the perspective of a Java developer. Today, we'll do a similar walkthrough from the perspective of a JavaScript developer.

    And, we'll use this to explore another popular usage scenario for Azure Functions: building a serverless HTTP API using JavaScript.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance
    • Create Azure Function with CLI
    • Calling an external API
    • Azure Samples & Scenarios for JS
    • Exercise: Support searching
    • Resources: For self-study!


    Developer Guidance

    If you're a JavaScript developer new to serverless on Azure, start by exploring the Azure Functions JavaScript Developers Guide. It covers:

    • Quickstarts for Node.js - using Visual Studio Code, CLI or Azure Portal
    • Guidance on hosting options and performance considerations
    • Azure Functions bindings and (code samples) for JavaScript
    • Scenario examples - integrations with other Azure Services

    Node.js 18 Support

    Node.js 18 Support (Public Preview)

    Azure Functions support for Node.js 18 entered Public Preview on Aug 31, 2022 and is supported by the Azure Functions v.4.x runtime!

    As we continue to explore how we can use Azure Functions, today we're going to look at using JavaScript to create one, and we're going to be using the newly released Node.js 18 support for Azure Functions to make the most out of the platform.

    Ensure you have Node.js 18 and v4.x of the Azure Functions Core Tools installed, along with a text editor (I'll use VS Code in this post) and a terminal, then we're ready to go.
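
    A quick way to confirm your setup before continuing:

    node --version   # should print v18.x
    func --version   # should print 4.x (Azure Functions Core Tools)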

    Scenario: Calling The GitHub API

    The application we're going to be building today will use the GitHub API to return a random commit message, so that we don't need to come up with one ourselves! After all, naming things can be really hard! 🤣

    Creating the Azure Function

    To create our Azure Function, we're going to use the Azure Functions CLI, which we can install using npm:

    npm install --global azure-functions-core-tools

    Once that's installed, we can use the new func command to initialise our project:

    func init --worker-runtime node --language javascript

    When running func init we can either provide the worker-runtime and language as arguments, or use the menu system that the tool will provide us. For brevity's sake, I've used the arguments here, specifying that we want node as the runtime and javascript as the language, but you could change that to typescript if you'd prefer to use TypeScript.

    Once the init command is completed, you should have a .vscode folder, and the files .gitignore, host.json, local.settings.json, and package.json.

    Files generated by func init

    Adding a HTTP Trigger

    We have an empty Functions app so far; what we need to do next is create a Function for it to run. We're going to make an HTTP Trigger Function, which is a Function that responds to HTTP requests. We'll use the func new command to create that:

    func new --template "HTTP Trigger" --name "get-commit-message"

    When this completes, we'll have a folder for the Function, using the name we provided, that contains the files function.json and index.js. Let's open the function.json to understand it a little bit:

    {
      "bindings": [
        {
          "authLevel": "function",
          "type": "httpTrigger",
          "direction": "in",
          "name": "req",
          "methods": [
            "get",
            "post"
          ]
        },
        {
          "type": "http",
          "direction": "out",
          "name": "res"
        }
      ]
    }

    This file is used to tell Functions about the Function that we've created and what it does, so it knows to handle the appropriate events. We have a bindings node which contains the event bindings for our Azure Function. The first binding uses the type httpTrigger, which indicates that it'll be executed, or triggered, by an HTTP event, and methods indicates that it's listening to both GET and POST (you can change this to the HTTP methods that you want to support). The HTTP request information will be bound to a property in the Functions context called req, so we can access query strings, the request body, etc.

    The other binding has the direction out, meaning it's something that the Function will return to the caller. Since this is an HTTP API, the type is http, indicating that we'll return an HTTP response, and that response will be on a property called res that we add to the Functions context.

    Let's go ahead and start the Function and call it:

    func start

    Starting the Function

    With the Function started, access the endpoint http://localhost:7071/api/get-commit-message via a browser or using cURL:

    curl http://localhost:7071/api/get-commit-message\?name\=ServerlessSeptember

    Hello from Azure Functions

    🎉 CONGRATULATIONS

    You created and ran a JavaScript function app locally!

    Calling an external API

    It's time to update the Function to do what we want to do - call the GitHub Search API and get some commit messages. The endpoint that we'll be calling is https://api.github.com/search/commits?q=language:javascript.

    Note: The GitHub API is rate limited and this sample will call it unauthenticated, so be aware of that in your own testing.

    To call this API, we'll leverage the newly released fetch support in Node 18 and async/await, to make for a very clean Function.

    Open up the index.js file, and delete the contents of the existing Function, so we have an empty one:

    module.exports = async function (context, req) {

    }

    The default template uses CommonJS, but you can use ES Modules with Azure Functions if you prefer.
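
    For example, an ES Modules version of the same empty Function would look something like this sketch - it assumes either an .mjs file extension or "type": "module" in package.json:

    // index.mjs
    export default async function (context, req) {
      // same signature as the CommonJS version, just exported as the default export
    }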

    Now we'll use fetch to call the API, and unpack the JSON response:

    module.exports = async function (context, req) {
      const res = await fetch("https://api.github.com/search/commits?q=language:javascript");
      const json = await res.json();
      const messages = json.items.map(item => item.commit.message);
      context.res = {
        body: {
          messages
        }
      };
    }

    To send a response to the client, we're setting the context.res property, where res is the name of the output binding in our function.json, and giving it a body that contains the commit messages.

    Run func start again, and call the endpoint:

    curl http://localhost:7071/api/get-commit-message

    Then you'll get some commit messages:

    A series of commit messages from the GitHub Search API

    🎉 CONGRATULATIONS

    There we go - we've created an Azure Function that acts as a proxy to another API, which we call (using native fetch in Node.js 18) and from which we return a subset of the JSON payload.

    Next Steps

    Other Triggers, Bindings

    This article focused on using the HTTPTrigger and relevant bindings, to build a serverless API using Azure Functions. How can you explore other supported bindings, with code samples to illustrate usage?

    Scenarios with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other services. Here are some suggestions:

    Exercise: Support searching

    The GitHub Search API allows you to provide search parameters via the q query string. In this sample, we hard-coded it to be language:javascript, but as a follow-on exercise, expand the Function to allow the caller to provide the search terms as a query string to the Azure Function, which is passed to the GitHub Search API. Hint - have a look at the req argument.
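
    If you get stuck, here's one possible shape of the change (a sketch, not the only answer) - read the search term from req.query and fall back to the original hard-coded value:

    module.exports = async function (context, req) {
      const q = req.query.q || "language:javascript";
      const res = await fetch(`https://api.github.com/search/commits?q=${encodeURIComponent(q)}`);
      const json = await res.json();
      context.res = {
        body: {
          messages: json.items.map(item => item.commit.message)
        }
      };
    }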

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/keda/index.html b/blog/tags/keda/index.html index 8ba0a47842..65e6e1309e 100644 --- a/blog/tags/keda/index.html +++ b/blog/tags/keda/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "keda"

    View All Tags

    · 7 min read
    Paul Yu

    Welcome to Day 11 of #30DaysOfServerless!

    Yesterday we explored Azure Container Concepts related to environments, networking and microservices communication - and illustrated these with a deployment example. Today, we turn our attention to scaling your container apps with demand.


    What We'll Cover

    • What makes ACA Serverless?
    • What is Keda?
    • Scaling Your ACA
    • ACA Scaling In Action
    • Exercise: Explore azure-opensource-labs examples
    • Resources: For self-study!


    So, what makes Azure Container Apps "serverless"?

    Today we are going to focus on what makes Azure Container Apps (ACA) a "serverless" offering. But what does the term "serverless" really mean? As much as we'd like to think there aren't any servers involved, that is certainly not the case. In general, "serverless" means that most (if not all) server maintenance has been abstracted away from you.

    With serverless, you don't spend any time managing and patching servers. This concern is offloaded to Azure and you simply focus on adding business value through application delivery. In addition to operational efficiency, cost efficiency can be achieved with serverless on-demand pricing models. Your workload horizontally scales out based on need and you only pay for what you use. To me, this is serverless, and my teammate @StevenMurawski said it best... "being able to scale to zero is what gives ACA its serverless magic."

    Scaling your Container Apps

    If you don't know by now, ACA is built on a solid open-source foundation. Behind the scenes, it runs on a managed Kubernetes cluster and includes several open-source components out of the box, including Dapr to help you build and run microservices, Envoy Proxy for ingress capabilities, and KEDA for event-driven autoscaling. Again, you do not need to install these components yourself. All you need to be concerned with is enabling and/or configuring your container app to leverage these components.

    Let's take a closer look at autoscaling in ACA to help you optimize your container app.

    What is KEDA?

    KEDA stands for Kubernetes Event-Driven Autoscaler. It is an open-source project initially started by Microsoft and Red Hat and has been donated to the Cloud-Native Computing Foundation (CNCF). It is maintained by a community of 200+ contributors and adopted by many large organizations. In terms of its status as a CNCF project, it is currently in the Incubating Stage, which means the project has gone through significant due diligence and is on its way towards the Graduation Stage.

    Prior to KEDA, horizontally scaling your Kubernetes deployment was achieved through the Horizontal Pod Autoscaler (HPA), which relies on resource metrics such as CPU and memory to determine when additional replicas should be deployed. Being limited to CPU and memory falls a bit short for certain workloads. This is especially true for apps that need to process messages from a queue or HTTP-based apps that can handle a specific number of incoming HTTP requests at a time. KEDA aims to fill that gap and provides a much more robust framework for scaling by working in conjunction with HPA. It offers many scalers for you to implement and even allows your deployments to scale to zero! 🥳

    KEDA architecture

    Configuring ACA scale rules

    As I mentioned above, ACA's autoscaling feature leverages KEDA and gives you the ability to configure the number of replicas to deploy based on rules (event triggers). The number of replicas can be configured as a static number or a range (minimum and maximum). So if you need your containers to run 24/7, set the min and max to the same value. By default, when you deploy a container app, it is set to scale from 0 to 10 replicas. The default scaling rule uses HTTP scaling and defaults to a minimum of 10 concurrent requests per second. Once the threshold of 10 concurrent requests per second is met, another replica will be deployed until it reaches the maximum number of replicas.

    At the time of this writing, a container app can have up to 30 replicas.

    Default autoscaler

    As a best practice, if you have a Min / max replicas range configured, you should configure a scaling rule even if it is just explicitly setting the default values.

    Adding HTTP scaling rule
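
    If you define your container app with a YAML manifest instead of the portal, explicitly setting those defaults looks roughly like this sketch (field names follow the Container Apps scale schema; the rule name is arbitrary):

    scale:
      minReplicas: 0
      maxReplicas: 10
      rules:
        - name: http-rule
          http:
            metadata:
              concurrentRequests: "10"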

    In addition to HTTP scaling, you can also configure an Azure queue rule, which allows you to use Azure Storage Queues as an event data source.

    Adding Azure Queue scaling rule

    The most flexibility comes with the Custom rule type. This opens up a LOT more options for scaling. All of KEDA's event-based scalers are supported with this option 🚀

    Adding Custom scaling rule

    Translating KEDA templates to Azure templates

    When you implement Custom rules, you need to become familiar with translating KEDA templates to Azure Resource Manager templates or ACA YAML manifests. The KEDA scaler documentation is great and it should be simple to translate KEDA template metadata to ACA rule metadata.

    The images below show how to translate a scaling rule which uses Azure Service Bus as an event data source. The custom rule type is set to azure-servicebus and details of the service bus are added to the Metadata section. One important thing to note here is that the connection string to the service bus was added as a secret on the container app and the trigger parameter must be set to connection.

    Azure Container App custom rule metadata

    Azure Container App custom rule metadata
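
    As a concrete sketch of that translation (the queue name and secret name below are placeholders), a KEDA azure-servicebus trigger like this:

    triggers:
      - type: azure-servicebus
        metadata:
          queueName: orders
          messageCount: "5"

    maps to an ACA custom scale rule along these lines, with the connection string referenced from a container app secret via the connection trigger parameter:

    scale:
      rules:
        - name: servicebus-rule
          custom:
            type: azure-servicebus
            metadata:
              queueName: orders
              messageCount: "5"
            auth:
              - secretRef: servicebus-connection-string
                triggerParameter: connection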

    Additional examples of KEDA scaler conversion can be found in the resources section and example video below.

    See Container App scaling in action

    Now that we've built up some foundational knowledge on how ACA autoscaling is implemented and configured, let's look at a few examples.

    Autoscaling based on HTTP traffic load

    Autoscaling based on Azure Service Bus message queues

    Summary

    ACA brings you a true serverless experience and gives you the ability to configure autoscaling rules based on KEDA scaler templates. This gives you flexibility to scale based on a wide variety of data sources in an event-driven manner. With the number of built-in scalers currently available, there is probably a scaler out there for all your use cases. If not, I encourage you to get involved with the KEDA community and help make it better!

    Exercise

    By now, you've probably read and seen enough and are ready to give this autoscaling thing a try. The example I walked through in the videos above can be found at the azure-opensource-labs repo. I highly encourage you to head over to the containerapps-terraform folder and try the lab out. There you'll find instructions which cover all the steps and tools you'll need to implement autoscaling container apps within your own Azure subscription.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun scaling your containers!

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/logic-apps/index.html b/blog/tags/logic-apps/index.html index b7c890dba8..0e142fb488 100644 --- a/blog/tags/logic-apps/index.html +++ b/blog/tags/logic-apps/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    One post tagged with "logic-apps"

    View All Tags

    · 10 min read
    Ayca Bas

    Welcome to Day 20 of #30DaysOfServerless!

Every day millions of people spend their precious time in productivity tools. What if you could use the data and intelligence behind Microsoft applications (Microsoft Teams, Outlook, and many other Office apps) to build seamless automations and custom apps that boost productivity?

    In this post, we'll learn how to build a seamless onboarding experience for new employees joining a company with the power of Microsoft Graph, integrated with Event Hubs and Logic Apps!


    What We'll Cover

    • ✨ The power of Microsoft Graph
    • 🖇️ How do Microsoft Graph and Event Hubs work together?
    • 🛠 Let's Build an Onboarding Workflow!
      • 1️⃣ Setup Azure Event Hubs + Key Vault
      • 2️⃣ Subscribe to users, receive change notifications from Logic Apps
      • 3️⃣ Create Onboarding workflow in the Logic Apps
    • 🚀 Debug: Your onboarding experience
    • ✋ Exercise: Try this tutorial out yourself!
    • 📚 Resources: For Self-Study


    ✨ The Power of Microsoft Graph

Microsoft Graph is the gateway to data and intelligence in the Microsoft 365 platform. Microsoft Graph exposes REST APIs and client libraries to access data across Microsoft 365 core services such as Calendar, Teams, To Do, Outlook, People, Planner, OneDrive, OneNote and more.

    Overview of Microsoft Graph

You can build custom experiences by using Microsoft Graph, such as automating the onboarding process for new employees. When new employees are created in Azure Active Directory, they will be automatically added to the Onboarding team on Microsoft Teams.

    Solution architecture


    🖇️ Microsoft Graph with Event Hubs

    Microsoft Graph uses a webhook mechanism to track changes in resources and deliver change notifications to the clients. For example, with Microsoft Graph Change Notifications, you can receive change notifications when:

    • a new task is added in the to-do list
    • a user changes the presence status from busy to available
    • an event is deleted/cancelled from the calendar

If you'd like to track a large set of resources at a high frequency, use Azure Event Hubs instead of traditional webhooks to receive change notifications. Azure Event Hubs is a popular real-time event ingestion and distribution service built for scale.

    EVENT GRID - PARTNER EVENTS

Microsoft Graph Change Notifications can also be received by using Azure Event Grid -- currently available for Microsoft Partners! Read the Partner Events Overview documentation for details.

Setup Azure Event Hubs + Key Vault

To get Microsoft Graph Change Notifications delivered to Azure Event Hubs, we'll have to set up Azure Event Hubs and Azure Key Vault. We'll use Azure Key Vault to give Microsoft Graph access to the Event Hubs connection string.

    1️⃣ Create Azure Event Hubs

1. Go to Azure Portal and select Create a resource, type Event Hubs and select Create.
    2. Fill in the Event Hubs namespace creation details, and then click Create.
    3. Go to the newly created Event Hubs namespace page, select Event Hubs tab from the left pane and + Event Hub:
      • Name your Event Hub as Event Hub
      • Click Create.
    4. Click the name of the Event Hub, and then select Shared access policies and + Add to add a new policy:
      • Give a name to the policy
      • Check Send and Listen
      • Click Create.
    5. After the policy has been created, click the name of the policy to open the details panel, and then copy the Connection string-primary key value. Write it down; you'll need it for the next step.
6. Go to the Consumer groups tab in the left pane, select + Consumer group, name your consumer group onboarding and select Create. (A CLI equivalent of these steps is sketched below.)
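If you prefer scripting this setup, a rough CLI equivalent looks like the sketch below (resource names are placeholders, the event hub is named onboarding-hub here because hub names cannot contain spaces, and flag names can vary slightly between CLI versions):

# Illustrative only - names are placeholders
az eventhubs namespace create --name onboarding-ns --resource-group onboarding-rg --location westeurope
az eventhubs eventhub create --name onboarding-hub --namespace-name onboarding-ns --resource-group onboarding-rg
az eventhubs eventhub authorization-rule create --name graph-policy --eventhub-name onboarding-hub --namespace-name onboarding-ns --resource-group onboarding-rg --rights Send Listen
az eventhubs eventhub authorization-rule keys list --name graph-policy --eventhub-name onboarding-hub --namespace-name onboarding-ns --resource-group onboarding-rg --query primaryConnectionString
az eventhubs eventhub consumer-group create --consumer-group-name onboarding --eventhub-name onboarding-hub --namespace-name onboarding-ns --resource-group onboarding-rg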

    2️⃣ Create Azure Key Vault

    1. Go to Azure Portal and select Create a resource, type Key Vault and select Create.
    2. Fill in the Key Vault creation details, and then click Review + Create.
3. Go to the newly created Key Vault, select the Secrets tab from the left pane and click + Generate/Import:
      • Give a name to the secret
      • For the value, paste in the connection string you generated at the Event Hubs step
      • Click Create
      • Copy the name of the secret.
    4. Select Access Policies from the left pane and + Add Access Policy:
      • For Secret permissions, select Get
      • For Principal, select Microsoft Graph Change Tracking
      • Click Add.
5. Select the Overview tab from the left pane and copy the Vault URI. (Again, a CLI equivalent of these steps is sketched below.)
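And a rough CLI equivalent of the Key Vault steps (again, names and IDs are placeholders; the object ID is that of the Microsoft Graph Change Tracking service principal in your tenant):

# Illustrative only - names and IDs are placeholders
az keyvault create --name onboarding-kv --resource-group onboarding-rg --location westeurope
az keyvault secret set --vault-name onboarding-kv --name eventhub-connection --value "<EVENT-HUBS-CONNECTION-STRING>"
# grant the Microsoft Graph Change Tracking service principal read access to secrets
az keyvault set-policy --name onboarding-kv --object-id "<GRAPH-CHANGE-TRACKING-SP-OBJECT-ID>" --secret-permissions get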

    Subscribe for Logic Apps change notifications

To start receiving Microsoft Graph Change Notifications, we'll need to create a subscription to the resource that we'd like to track - here, 'users'. We'll use Azure Logic Apps to create the subscription.

To create a subscription for Microsoft Graph Change Notifications, we'll need to make an HTTP POST request to https://graph.microsoft.com/v1.0/subscriptions. Microsoft Graph requires Azure Active Directory authentication to make API calls. First, we'll need to register an app in Azure Active Directory, and then we will make the Microsoft Graph Subscription API call with Azure Logic Apps.

    1️⃣ Create an app in Azure Active Directory

    1. In the Azure Portal, go to Azure Active Directory and select App registrations from the left pane and select + New registration. Fill in the details for the new App registration form as below:
      • Name: Graph Subscription Flow Auth
      • Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
      • Select Register.
    2. Go to newly registered app in Azure Active Directory, select API permissions:
      • Select + Add a permission and Microsoft Graph
      • Select Application permissions and add User.Read.All and Directory.Read.All.
      • Select Grant admin consent for the organization
    3. Select Certificates & secrets tab from the left pane, select + New client secret:
      • Choose desired expiry duration
      • Select Add
      • Copy the value of the secret.
    4. Go to Overview from the left pane, copy Application (client) ID and Directory (tenant) ID.

    2️⃣ Create subscription with Azure Logic Apps

1. Go to Azure Portal and select Create a resource, type Logic apps and select Create.

    2. Fill in the Logic Apps creation details, and then click Create.

    3. Go to the newly created Logic Apps page, select Workflows tab from the left pane and select + Add:

      • Give a name to the new workflow as graph-subscription-flow
      • Select Stateful as a state type
      • Click Create.
    4. Go to graph-subscription-flow, and then select Designer tab.

    5. In the Choose an operation section, search for Schedule and select Recurrence as a trigger. Fill in the parameters as below:

      • Interval: 61
      • Frequency: Minute
      • Time zone: Select your own time zone
      • Start time: Set a start time
    6. Select + button in the flow and select add an action. Search for HTTP and select HTTP as an action. Fill in the parameters as below:

      • Method: POST
      • URI: https://graph.microsoft.com/v1.0/subscriptions
      • Headers:
        • Key: Content-type
        • Value: application/json
      • Body:
      {
      "changeType": "created, updated",
      "clientState": "secretClientValue",
      "expirationDateTime": "@{addHours(utcNow(), 1)}",
      "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47",
      "resource": "users"
      }

      In notificationUrl, make sure to replace <YOUR-VAULT-URI> with the vault uri and <YOUR-KEY-VAULT-SECRET-NAME> with the secret name that you copied from the Key Vault.

In resource, define the resource type whose changes you'd like to track. For our example, we will track changes to the users resource.

      • Authentication:
        • Authentication type: Active Directory OAuth
        • Authority: https://login.microsoft.com
        • Tenant: Directory (tenant) ID copied from AAD app
        • Audience: https://graph.microsoft.com
        • Client ID: Application (client) ID copied from AAD app
        • Credential Type: Secret
        • Secret: value of the secret copied from AAD app
    7. Select Save and run your workflow from the Overview tab.

Check your subscription in Graph Explorer: If you'd like to make sure that your subscription was created successfully by Logic Apps, you can go to Graph Explorer, log in with your Microsoft 365 account and make a GET request to https://graph.microsoft.com/v1.0/subscriptions. Your subscription should appear in the response after it's created successfully.
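The response of that GET request lists your subscriptions in a value array, roughly like the illustrative sketch below (IDs and dates are placeholders):

{
  "value": [
    {
      "id": "<SUBSCRIPTION-ID>",
      "resource": "users",
      "changeType": "created, updated",
      "clientState": "secretClientValue",
      "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=<TENANT-ID>",
      "expirationDateTime": "<EXPIRATION-DATE-TIME>"
    }
  ]
}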

    Subscription workflow success

After the subscription is created successfully by Logic Apps, Azure Event Hubs will receive notifications whenever a new user is created in Azure Active Directory.


    Create Onboarding workflow in Logic Apps

We'll create a second workflow in the Logic Apps to receive change notifications from Event Hubs when a new user is created in Azure Active Directory and add the new user to the Onboarding team on Microsoft Teams.

    1. Go to the Logic Apps you created in the previous steps, select Workflows tab and create a new workflow by selecting + Add:
      • Give a name to the new workflow as teams-onboarding-flow
      • Select Stateful as a state type
      • Click Create.
    2. Go to teams-onboarding-flow, and then select Designer tab.
3. In the Choose an operation section, search for Event Hub and select When events are available in Event Hub as a trigger. Set up the Event Hub connection as below:
      • Create Connection:
        • Connection name: Connection
        • Authentication Type: Connection String
        • Connection String: Go to Event Hubs > Shared Access Policies > RootManageSharedAccessKey and copy Connection string–primary key
        • Select Create.
      • Parameters:
        • Event Hub Name: Event Hub
        • Consumer Group Name: onboarding
    4. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: Events
5. Inside For each, select + in the flow and add an action, search for Data operations and select Parse JSON. Fill in the Parse JSON action as below:
  • Content: Events Content
  • Schema: Copy the JSON content from schema-parse.json and paste it as the schema (an example of the payload this schema describes is shown after these steps)
    6. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: value
  1. Inside For each, select + in the flow and add an action, search for Microsoft Teams and select Add a member to a team. Log in with your Microsoft 365 account to create a connection and fill in the Add a member to a team action as below:
  • Team: Create an Onboarding team on Microsoft Teams and select it
  • A user AAD ID for the user to add to a team: id
    7. Select Save.
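For orientation, each event read from Event Hubs carries a Microsoft Graph change notification payload that looks roughly like the illustrative sketch below (IDs are placeholders); this is why the flow parses the content, loops over value, and hands each notification's resource id to the Microsoft Teams action:

{
  "value": [
    {
      "changeType": "created",
      "clientState": "secretClientValue",
      "resource": "Users/<USER-ID>",
      "resourceData": {
        "@odata.type": "#Microsoft.Graph.User",
        "@odata.id": "Users/<USER-ID>",
        "id": "<USER-ID>"
      },
      "subscriptionId": "<SUBSCRIPTION-ID>",
      "tenantId": "<TENANT-ID>"
    }
  ]
}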

    🚀 Debug your onboarding experience

To debug our onboarding experience, we'll need to create a new user in Azure Active Directory and see whether it's automatically added to the Onboarding team on Microsoft Teams.

    1. Go to Azure Portal and select Azure Active Directory from the left pane and go to Users. Select + New user and Create new user. Fill in the details as below:

      • User name: JaneDoe
      • Name: Jane Doe

      new user in Azure Active Directory

2. Adding Jane Doe as a new user should trigger the teams-onboarding-flow to run. teams onboarding flow success

    3. Once the teams-onboarding-flow runs successfully, you should be able to see Jane Doe as a member of the Onboarding team on Microsoft Teams! 🥳 new member in Onboarding team on Microsoft Teams

    Congratulations! 🎉

    You just built an onboarding experience using Azure Logic Apps, Azure Event Hubs and Azure Key Vault.


    📚 Resources


    ...and that's it! We've successfully deployed our application on Azure!

    But there's more!

    Best practices: Monitoring and CI/CD!

    In my opinion, it's not enough to just set up the application on Azure! I want to know that my web app is performant and serving my users reliably! I also want to make sure that I'm not inadvertently breaking my application as I continue to make changes to it. Thankfully, the Azure Developer CLI also handles all of this via two additional commands - azd monitor and azd pipeline config.

    Application Monitoring

    When we provisioned all of our infrastructure, we also set up application monitoring via a Bicep file in our .infra/ directory that spec'd out an Application Insights dashboard. By running azd monitor we can see the dashboard with live metrics that was configured for the application.

    We can also navigate to the Application Dashboard by clicking on the resource group name, where you can set a specific refresh rate for the dashboard, and see usage, reliability, and performance metrics over time.

    I don't know about everyone else but I have spent a ton of time building out similar dashboards. It can be super time-consuming to write all the queries and create the visualizations so this feels like a real time saver.

    CI/CD

    Finally let's talk about setting up CI/CD! This might be my favorite azd feature. As I mentioned before, the Azure Developer CLI has a command, azd pipeline config, which uses the files in the .github/ directory to set up a GitHub Action. More than that, if there is no upstream repo, the Developer CLI will actually help you create one. But what does this mean exactly? Because our GitHub Action is using the same commands you'd run in the CLI under the hood, we're actually going to have CI/CD set up to run on every commit into the repo, against real Azure resources. What a sweet collaboration feature!

    That's it! We've gone end-to-end with the Azure Developer CLI - initialized a project, provisioned the resources on Azure, deployed our code on Azure, set up monitoring logs and dashboards, and set up a CI/CD pipeline with GitHub Actions to run on every commit into the repo (on real Azure resources!).
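To put that end-to-end flow in one place, here is a minimal command sketch (the template name is a placeholder; azd up can also be used to combine provisioning and deployment):

# Illustrative azd workflow - template name is a placeholder
azd init --template todo-nodejs-mongo    # scaffold a project from an azd template
azd provision                            # create the Azure resources described by the project's Bicep files
azd deploy                               # build and deploy the application code
azd monitor                              # open the Application Insights dashboard
azd pipeline config                      # configure a GitHub Actions pipeline against real Azure resources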

    Exercise: Try it yourself or create your own template!

    As an exercise, try out the workflow above with any template on GitHub!

    Or, try turning your own project into an Azure Developer CLI-enabled template by following this guidance. If you create your own template, don't forget to tag the repo with the azd-templates topic on GitHub to help others find it (unfamiliar with GitHub topics? Learn how to add topics to your repo)! We'd also love to chat with you about your experience creating an azd template - if you're open to providing feedback around this, please fill out this form!

    Resources


    11 posts tagged with "microservices"

    View All Tags

    · 8 min read
    Paul Yu

    Welcome to Day 10 of #30DaysOfServerless!

We continue our exploration into Azure Container Apps, with today's focus being communication between microservices, and how to configure your Azure Container Apps environment in the context of a deployment example.


    What We'll Cover

    • ACA Environments & Virtual Networking
    • Basic Microservices Communications
    • Walkthrough: ACA Deployment Example
    • Summary and Next Steps
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

In yesterday's post, we learned what the Azure Container Apps (ACA) service is and the problems it aims to solve. It is considered to be a Container-as-a-Service platform since much of the complexity of running a Kubernetes cluster is managed for you.

Some of the use cases for ACA include event-driven processing jobs and background tasks, but this article will focus on hosting microservices and how they can communicate with each other within the ACA service. At the end of this article, you will have a solid understanding of how networking and communication are handled, and you'll be left with a few tutorials to try.

    Environments and virtual networking in ACA

    Before we jump into microservices communication, we should review how networking works within ACA. With ACA being a managed service, Azure will take care of most of your underlying infrastructure concerns. As you provision an ACA resource, Azure provisions an Environment to deploy Container Apps into. This environment is your isolation boundary.

    Azure Container Apps Environment

    By default, Azure creates and manages a new Virtual Network (VNET) for you and the VNET is associated with the environment. As you deploy container apps, they are deployed into the same VNET and the environment is assigned a static public IP address which allows your apps to be accessible over the internet. This VNET is not visible or manageable.

    If you need control of the networking flows within the VNET, you can pre-provision one and tell Azure to deploy an environment within it. This "bring-your-own" VNET model allows you to deploy an environment in either External or Internal modes. Deploying an environment in External mode gives you the flexibility of managing your own VNET, while still allowing your containers to be accessible from outside the environment; a static public IP address is assigned to the environment. When deploying in Internal mode, your containers are accessible within the environment and/or VNET but not accessible from the internet.

Bringing your own VNET will require some planning, and you will need to dedicate an empty subnet which will be used exclusively by the ACA environment. The size of your subnet will be dependent on how many containers you plan on deploying and your scaling requirements, and one requirement to know is that the subnet address range must have a /23 CIDR prefix at minimum. You will also need to think about your deployment strategy since ACA has the concept of Revisions which will also consume IPs from your subnet.

Some additional restrictions to consider when planning your subnet address space are listed in the Resources section below and can be addressed in future posts, so be sure to follow us on dev.to and bookmark the ServerlessSeptember site.

    Basic microservices communication in ACA

When it comes to communications between containers, ACA addresses this concern with its Ingress capabilities. With HTTP Ingress enabled on your container app, you can expose your app on an HTTPS endpoint.

If your environment is deployed using default networking and your containers need to be accessible from outside the environment, you will need to set the Ingress traffic option to Accepting traffic from anywhere. This will generate a Fully-Qualified Domain Name (FQDN) which you can use to access your app right away. The ingress feature also generates and assigns a Secure Sockets Layer (SSL) certificate for the FQDN.

    External ingress on Container App

    If your environment is deployed using default networking and your containers only need to communicate with other containers in the environment, you'll need to set the Ingress traffic option to Limited to Container Apps Environment. You get a FQDN here as well, but in the section below we'll see how that changes.

    Internal ingress on Container App

    As mentioned in the networking section above, if you deploy your ACA environment into a VNET in internal mode, your options will be Limited to Container Apps Environment or Limited to VNet.

    Ingress on internal virtual network

    Note how the Accepting traffic from anywhere option is greyed out. If your VNET is deployed in external mode, then the option will be available.

Let's walk through an example ACA deployment

The diagram below illustrates a simple microservices application that I deployed to ACA. The three container apps all have ingress enabled. The greeting-service app calls two backend services: a hello-service that returns the text Hello (in random casing) and a world-service that returns the text World (in a few random languages). The greeting-service concatenates the two strings together and returns Hello World to the browser. The greeting-service is the only service accessible via external ingress, while the two backend services are only accessible via internal ingress.

    Greeting Service overview

    With ingress enabled, let's take a quick look at the FQDN structures. Here is the FQDN of the external greeting-service.

    https://greeting-service.victoriouswave-3749d046.eastus.azurecontainerapps.io

    We can break it down into these components:

https://[YOUR-CONTAINER-APP-NAME].[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    And here is the FQDN of the internal hello-service.

    https://hello-service.internal.victoriouswave-3749d046.eastus.azurecontainerapps.io

    Can you spot the difference between FQDNs?

    That was too easy 😉... the word internal is added as a subdomain in the FQDN between your container app name and the random name for all internal ingress endpoints.

https://[YOUR-CONTAINER-APP-NAME].internal.[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    Now that we know the internal service FQDNs, we use them in the greeting-service app to achieve basic service-to-service communications.

So we can inject the FQDNs of downstream APIs into upstream apps using environment variables, but the downside to this approach is that you need to deploy the downstream containers ahead of time, and this dependency will need to be planned for during your deployment process. There are ways around this, and one option is to leverage the auto-injected environment variables within your app code.

If you use the Console blade for the hello-service container app and run the env command, you will see environment variables named CONTAINER_APP_NAME and CONTAINER_APP_ENV_DNS_SUFFIX. You can use these values to determine FQDNs within your upstream app.

    hello-service environment variables

Back in the greeting-service container I can invoke the hello-service container's sayhello method. I know the container app name is hello-service and this service is exposed over an internal ingress; therefore, if I add the internal subdomain to the CONTAINER_APP_ENV_DNS_SUFFIX I can invoke an HTTP request to the hello-service from my greeting-service container.
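As a quick illustration, from a shell inside the greeting-service container you could build the internal URL from those auto-injected variables and call the backend directly (the /sayhello path is an assumption based on the method name above):

# Run from the greeting-service console (illustrative)
echo $CONTAINER_APP_NAME                  # greeting-service
echo $CONTAINER_APP_ENV_DNS_SUFFIX        # e.g. victoriouswave-3749d046.eastus.azurecontainerapps.io
curl https://hello-service.internal.$CONTAINER_APP_ENV_DNS_SUFFIX/sayhello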

    Invoke the sayHello method from the greeting-service container

As you can see, the ingress feature enables communications to other container apps over HTTP/S, and ACA will inject environment variables into our container to help determine what the ingress FQDNs would be. All we need now is a little bit of code modification in the greeting-service app to build the FQDNs of our backend APIs by retrieving these environment variables.

    Greeting service code

    ... and now we have a working microservices app on ACA! 🎉

    Hello World

    Summary and next steps

    We've covered Container Apps networking and the basics of how containers communicate with one another. However, there is a better way to address service-to-service invocation using Dapr, which is an open-source framework for building microservices. It is natively integrated into the ACA service and in a future post, you'll learn how to enable it in your Container App to address microservices concerns and more. So stay tuned!

    Exercises

As a takeaway for today's post, I encourage you to complete this tutorial, and if you'd like to deploy the sample app that was presented in this article, my teammate @StevenMurawski is hosting a docker-compose-examples repo which includes samples for deploying to ACA using Docker Compose files. To learn more about the az containerapp compose command, links to his blog articles are listed in the Resources section below.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun packing and shipping containers! See you in the next post!

    Resources

The sample app presented here was inspired by services demonstrated in the book Introducing Distributed Application Runtime (Dapr): Simplifying Microservices Applications Development Through Proven and Reusable Patterns and Practices. Go check it out to learn more about Dapr!


    11 posts tagged with "microservices"

    View All Tags

    · 12 min read
    Nitya Narasimhan

    Welcome to Day 9 of #30DaysOfServerless!


    What We'll Cover

    • The Week Ahead
    • Hello, Container Apps!
    • Quickstart: Build Your First ACA!
    • Under The Hood: Core ACA Concepts
    • Exercise: Try this yourself!
    • Resources: For self-study!


    The Week Ahead

    Welcome to Week 2 of #ServerlessSeptember, where we put the focus on Microservices and building Cloud-Native applications that are optimized for serverless solutions on Azure. One week is not enough to do this complex topic justice so consider this a 7-part jumpstart to the longer journey.

    1. Hello, Container Apps (ACA) - Learn about Azure Container Apps, a key service that helps you run microservices and containerized apps on a serverless platform. Know the core concepts. (Tutorial 1: First ACA)
    2. Communication with Microservices - Dive deeper into two key concepts: environments and virtual networking. Learn how microservices communicate in ACA, and walkthrough an example. (Tutorial 2: ACA with 3 Microservices)
    3. Scaling Your Container Apps - Learn about KEDA. Understand how to configure your ACA for auto-scaling with KEDA-supported triggers. Put this into action by walking through a tutorial. (Tutorial 3: Configure Autoscaling)
    4. Hello, Distributed Application Runtime (Dapr) - Learn about Dapr and how its Building Block APIs simplify microservices development with ACA. Know how the sidecar pattern enables incremental adoption of Dapr APIs without requiring any Dapr code integration in app. (Tutorial 4: Setup & Explore Dapr)
    5. Building ACA with Dapr - See how Dapr works with ACA by building a Dapr-enabled Azure Container App. Walk through a .NET tutorial using Pub/Sub and State Management APIs in an enterprise scenario. (Tutorial 5: Build ACA with Dapr)
    6. Managing Secrets With Dapr - We'll look at the Secrets API (a key Building Block of Dapr) and learn how it simplifies management of sensitive information in ACA.
    7. Microservices + Serverless On Azure - We recap Week 2 (Microservices) and set the stage for Week 3 ( Integrations) of Serverless September. Plus, self-study resources including ACA development tutorials in different languages.

    Ready? Let's go!


    Azure Container Apps!

When building your application, your first decision is about where you host your application. The Azure Architecture Center has a handy chart to help you decide between choices like Azure Functions, Azure App Service, Azure Container Instances, Azure Container Apps and more. But if you are new to this space, you'll need a good understanding of the terms and concepts behind these services. Today, we'll focus on Azure Container Apps (ACA) - so let's start with the fundamentals.

    Containerized App Defined

    A containerized app is one where the application components, dependencies, and configuration, are packaged into a single file (container image), which can be instantiated in an isolated runtime environment (container) that is portable across hosts (OS). This makes containers lightweight and scalable - and ensures that applications behave consistently on different host platforms.

    Container images can be shared via container registries (public or private) helping developers discover and deploy related apps with less effort. Scaling a containerized app can be as simple as activating more instances of its container image. However, this requires container orchestrators to automate the management of container apps for efficiency. Orchestrators use technologies like Kubernetes to support capabilities like workload scheduling, self-healing and auto-scaling on demand.
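As a tiny, concrete illustration of that workflow (the image and registry names are placeholders):

# Illustrative container workflow - names are placeholders
docker build -t myapp:1.0 .                          # package code, dependencies and config into an image
docker run -p 8080:8080 myapp:1.0                    # instantiate the image as an isolated container
docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0
docker push myregistry.azurecr.io/myapp:1.0          # share the image via a container registry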

    Cloud-Native & Microservices

    Containers are seen as one of the 5 pillars of Cloud-Native app development, an approach where applications are designed explicitly to take advantage of the unique benefits of modern dynamic environments (involving public, private and hybrid clouds). Containers are particularly suited to serverless solutions based on microservices.

    • With serverless - developers use managed services instead of managing their own infrastructure. Services are typically event-driven and can be configured for autoscaling with rules tied to event triggers. Serverless is cost-effective, with developers paying only for the compute cycles and resources they use.
    • With microservices - developers compose their applications from independent components. Each component can be deployed in its own container, and scaled at that granularity. This simplifies component reuse (across apps) and maintainability (over time) - with developers evolving functionality at microservice (vs. app) levels.

    Hello, Azure Container Apps!

    Azure Container Apps is the managed service that helps you run containerized apps and microservices as a serverless compute solution, on Azure. You can:

    • deploy serverless API endpoints - autoscaled by HTTP request traffic
    • host background processing apps - autoscaled by CPU or memory load
    • handle event-driven processing - autoscaled by #messages in queue
    • run microservices - autoscaled by any KEDA-supported scaler.

    Want a quick intro to the topic? Start by watching the short video below - then read these two posts from our ZeroToHero series:


    Deploy Your First ACA

    Dev Options

    We typically have three options for development:

    • Use the Azure Portal - provision and deploy from a browser.
    • Use Visual Studio Code (with relevant extensions) - if you prefer an IDE
• Use the Azure CLI - if you prefer to build and deploy from the command line.

    The documentation site has quickstarts for three contexts:

    For this quickstart, we'll go with the first option (sample image) so we can move quickly to core concepts. We'll leave the others as an exercise for you to explore.

    1. Setup Resources

    PRE-REQUISITES

    You need:

    • An Azure account with an active subscription
    • An installed Azure CLI

    Start by logging into Azure from the CLI. The command should launch a browser to complete the auth flow (or give you an option to take an alternative path).

    $ az login

    Successful authentication will result in extensive command-line output detailing the status of your subscription.

    Next, install the Azure Container Apps extension for the CLI

    $ az extension add --name containerapp --upgrade
    ...
    The installed extension 'containerapp' is in preview.

    Once successfully installed, register the Microsoft.App namespace.

    $ az provider register --namespace Microsoft.App

    Then set local environment variables in that terminal - and verify they are set correctly:

    $ RESOURCE_GROUP="my-container-apps"
    $ LOCATION="canadacentral"
    $ CONTAINERAPPS_ENVIRONMENT="my-environment"

    $ echo $LOCATION $RESOURCE_GROUP $CONTAINERAPPS_ENVIRONMENT
    canadacentral my-container-apps my-environment

    Now you can use Azure CLI to provision a resource group for this tutorial. Creating a resource group also makes it easier for us to delete/reclaim all resources used at the end of this tutorial.

    az group create \
    --name $RESOURCE_GROUP \
    --location $LOCATION
    Congratulations

    You completed the Setup step!

    On completion, the console should print out the details of the newly created resource group. You should also be able to visit the Azure Portal and find the newly-active my-container-apps resource group under your active subscription.

    2. Create Environment

    An environment is like the picket fence around your property. It creates a secure boundary that contains a group of container apps - such that all apps deployed to it share the same virtual network and logging resources.

    $ az containerapp env create \
    --name $CONTAINERAPPS_ENVIRONMENT \
    --resource-group $RESOURCE_GROUP \
    --location $LOCATION

    No Log Analytics workspace provided.
    Generating a Log Analytics workspace with name ...

    This can take a few minutes. When done, you will see the terminal display more details. You can also check the resource group in the portal and see that a Container Apps Environment and a Log Analytics Workspace are created for you as part of this step.

    You've got the fence set up. Now it's time to build your home - er, container app!

    3. Create Container App

    Here's the command we'll use to create our first Azure Container App. Note that the --image argument provides the link to a pre-existing containerapps-helloworld image.

    az containerapp create \
    --name my-container-app \
    --resource-group $RESOURCE_GROUP \
    --environment $CONTAINERAPPS_ENVIRONMENT \
    --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
    --target-port 80 \
    --ingress 'external' \
    --query properties.configuration.ingress.fqdn
    ...
    ...

    Container app created. Access your app at <URL>

The --ingress property shows that the app is open to external requests; in other words, it is publicly visible at the <URL> that is printed out on the terminal on successful completion of this step.

    4. Verify Deployment

Let's see if this works. You can verify your container app by visiting the URL returned above in your browser. You should see something like this!

    Container App Hello World

You can also visit the Azure Portal and look under the created Resource Group. You should see that a new Container App resource was created after this step.

    Congratulations

    You just created and deployed your first "Hello World" Azure Container App! This validates your local development environment setup and existence of a valid Azure subscription.

    5. Clean Up Your Resources

    It's good practice to clean up resources once you are done with a tutorial.

    THIS ACTION IS IRREVERSIBLE

    This command deletes the resource group we created above - and all resources in it. So make sure you specified the right name, then confirm deletion.

    $ az group delete --name $RESOURCE_GROUP
    Are you sure you want to perform this operation? (y/n):

    Note that you can also delete the resource group from the Azure Portal interface if that feels more comfortable. For now, we'll just use the Portal to verify that deletion occurred. If you had previously opened the Resource Group page for the created resource, just refresh it. You should see something like this:

    Resource Not Found


    Core Concepts

    COMING SOON

    An illustrated guide summarizing these concepts in a single sketchnote.

    We covered a lot today - we'll stop with a quick overview of core concepts behind Azure Container Apps, each linked to documentation for self-study. We'll dive into more details on some of these concepts in upcoming articles:

    • Environments - are the secure boundary around a group of container apps that are deployed in the same virtual network. They write logs to a shared Log Analytics workspace and can communicate seamlessly using Dapr, if used.
    • Containers refer to the container image deployed in the Azure Container App. They can use any runtime, programming language, or development stack - and be discovered using any public or private container registry. A container app can support multiple containers.
    • Revisions are immutable snapshots of an Azure Container App. The first revision is created when the ACA is first deployed, with new revisions created when redeployment occurs with revision-scope changes. Multiple revisions can run concurrently in an environment.
    • Application Lifecycle Management revolves around these revisions, with a container app having three phases: deployment, update and deactivation.
• Microservices are independent units of functionality in Cloud-Native architectures. A single container app typically represents a single microservice, and can be composed from one or more containers. Microservices can now be scaled and upgraded independently, giving your application more flexibility and control.
• Networking architecture consists of a virtual network (VNET) associated with the environment. Unless you provide a custom VNET at environment creation time, a default VNET is automatically created. The VNET configuration determines access (ingress, internal vs. external) and can influence auto-scaling choices (e.g., use HTTP Edge Proxy and scale based on the number of HTTP requests).
    • Observability is about monitoring the health of your application and diagnosing it to improve reliability or performance. Azure Container Apps has a number of features - from Log streaming and Container console to integration with Azure Monitor - to provide a holistic view of application status over time.
    • Easy Auth is possible with built-in support for authentication and authorization including support for popular identity providers like Facebook, Google, Twitter and GitHub - alongside the Microsoft Identity Platform.

Keep these terms in mind as we walk through more tutorials this week, to see how they apply in real examples. Finally, a note on Dapr, the Distributed Application Runtime that abstracts away many of the challenges posed by distributed systems - and lets you focus on your application logic.

    DAPR INTEGRATION MADE EASY

    Dapr uses a sidecar architecture, allowing Azure Container Apps to communicate with Dapr Building Block APIs over either gRPC or HTTP. Your ACA can be built to run with or without Dapr - giving you the flexibility to incrementally adopt specific APIs and unlock related capabilities as the need arises.

In later articles this week, we'll do a deeper dive into Dapr and build our first Dapr-enabled Azure Container App to get a better understanding of this integration.

    Exercise

Congratulations! You made it! By now you should have a good idea of what Cloud-Native development means, why Microservices and Containers are important to that vision - and how Azure Container Apps helps simplify the building and deployment of microservices-based applications using serverless architectures on Azure.

    Now it's your turn to reinforce learning by doing.

    Resources

    Three key resources to bookmark and explore:


    11 posts tagged with "microservices"

    View All Tags

    · 7 min read
    Brian Benz

    Welcome to Day 25 of #30DaysOfServerless!

    Azure Container Apps enable application code packaged in containers to run and scale without the overhead of managing cloud infrastructure and container orchestration. In this post I'll show you how to deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps.


    What We'll Cover

    • Introduction to Deploying Java containers in the cloud
    • Step-by-step: Deploying to Azure Container Registry
    • Step-by-step: Deploying and running on Azure Container Apps
    • Resources: For self-study!


    Deploy Java containers to cloud

    We'll deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps. Here are the main steps:

    • Create Azure Container Registry (ACR) on Azure portal
    • Create Azure Container App (ACA) on Azure portal.
    • Deploy code to Azure Container Registry from the Azure CLI.
    • Deploy container from ACR to ACA using the Azure portal.
    PRE-REQUISITES

    Sign in to Azure from the CLI using the az login command, and follow the prompts in your browser to complete the authentication process. Also, ensure you're running the latest version of the CLI by using the az upgrade command.

    1. Get Sample Code

Fork and clone the sample GitHub repo to your local machine. Navigate to the repo and click Fork in the top-right corner of the page.

The example code that we're using is a very basic containerized Spring Boot example. There is a lot more to learn about Spring Boot apps in Docker; for a deep dive, check out this Spring Boot Guide.

    2. Run Sample Locally (Optional)

    If you have docker installed locally, you can optionally test the code on your local machine. Navigate to the root directory of the forked repository and run the following commands:

    docker build -t spring-boot-docker-aca .
    docker run -p 8080:8080 spring-boot-docker-aca

Open a browser and go to http://localhost:8080. You should see this message:

    Hello Docker World

That indicates that the Spring Boot app is successfully running locally in a docker container.

Next, let's set up an Azure Container Registry and an Azure Container App, and deploy this container to the cloud!


    3. Step-by-step: Deploy to ACR

    To create a container registry from the portal dashboard, Select Create a resource > Containers > Container Registry.

    Navigate to container registry in portal

    In the Basics tab, enter values for Resource group and Registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. Create a new resource group in the West US location named spring-boot-docker-aca. Select the 'Basic' SKU.

    Keep the default values for the remaining settings. Then select Review + create, then Create. When the Deployment succeeded message appears, select the container registry in the portal.

    Note the registry server name ending with azurecr.io. You will use this in the following steps when you push and pull images with Docker.

    3.1 Log into registry using the Azure CLI

    Before pushing and pulling container images, you must log in to the registry instance. Sign into the Azure CLI on your local machine, then run the az acr login command. For this step, use the registry name, not the server name ending with azurecr.io.

    From the command line, type:

    az acr login --name myregistryname

    The command returns Login Succeeded once completed.

    3.2 Build & deploy with az acr build

Next, we're going to deploy the docker container we created earlier using the az acr build command. az acr build creates a docker build from local code and pushes the container image to Azure Container Registry if the build is successful.

In the command line, go to your local clone of the spring-boot-docker-aca repo and type:

    az acr build --registry myregistryname --image spring-boot-docker-aca:v1 .

    3.3 List container images

Once the az acr build command is complete, you should be able to view the container as a repository in the registry. In the portal, open your registry and select Repositories, then select the spring-boot-docker-aca repository that was created by az acr build. You should also see the v1 image under Tags.

    4. Deploy on ACA

    Now that we have an image in the Azure Container Registry, we can deploy it to Azure Container Apps. For the first deployment, we'll pull the container from our ACR as part of the ACA setup.

    4.1 Create a container app

    We'll create the container app at the same place that we created the container registry in the Azure portal. From the portal, select Create a resource > Containers > Container App. In the Basics tab, set these values:

    4.2 Enter project details

• Subscription: Your Azure subscription.
• Resource group: Use the spring-boot-docker-aca resource group.
• Container app name: Enter spring-boot-docker-aca.

    4.3 Create an environment

    1. In the Create Container App environment field, select Create new.

    2. In the Create Container App Environment page on the Basics tab, enter the following values:

  • Environment name: Enter my-environment.
  • Region: Select westus3.
    3. Select OK.

    4. Select the Create button at the bottom of the Create Container App Environment page.

    5. Select the Next: App settings button at the bottom of the page.

    5. App settings tab

    The App settings tab is where you connect to the ACR and pull the repository image:

• Use quickstart image: Uncheck the checkbox.
• Name: Enter spring-boot-docker-aca.
• Image source: Select Azure Container Registry.
• Registry: Select your ACR from the list.
• Image: Select spring-boot-docker-aca from the list.
• Image Tag: Select v1 from the list.

    5.1 Application ingress settings

• Ingress: Select Enabled.
• Ingress visibility: Select External to publicly expose your container app.
• Target port: Enter 8080.

    5.2 Deploy the container app

    1. Select the Review and create button at the bottom of the page.
    2. Select Create.

    Once the deployment is successfully completed, you'll see the message: Your deployment is complete.
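If you prefer to script these steps instead of clicking through the portal, a roughly equivalent CLI deployment might look like the sketch below. It assumes the registry admin user is enabled so the CLI can supply pull credentials, and it reuses the names from the walkthrough; treat it as illustrative rather than a verbatim recipe:

# Illustrative CLI equivalent of the portal steps above
az acr update --name myregistryname --admin-enabled true
ACR_PASSWORD=$(az acr credential show --name myregistryname --query "passwords[0].value" -o tsv)

az containerapp env create --name my-environment --resource-group spring-boot-docker-aca --location westus3

az containerapp create \
  --name spring-boot-docker-aca \
  --resource-group spring-boot-docker-aca \
  --environment my-environment \
  --image myregistryname.azurecr.io/spring-boot-docker-aca:v1 \
  --registry-server myregistryname.azurecr.io \
  --registry-username myregistryname \
  --registry-password "$ACR_PASSWORD" \
  --target-port 8080 \
  --ingress external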

    5.3 Verify deployment

    In the portal, go to the Overview of your spring-boot-docker-aca Azure Container App, and click on the Application Url. You should see this message in the browser:

    Hello Docker World

That indicates that the Spring Boot app is running in a docker container in your spring-boot-docker-aca Azure Container App.

    Resources: For self-study!

Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!


    11 posts tagged with "microservices"

    View All Tags

    · 19 min read
    Alex Wolf

    Welcome to Day 24 of #30DaysOfServerless!

    We continue exploring E2E scenarios with this tutorial where you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps.

    The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.


    What We'll Cover

    • Deploy ASP.NET Core 6.0 app to Azure Container Apps
    • Automate deployment workflows using GitHub Actions
    • Provision and deploy resources using Azure Bicep
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.

    In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps. The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.

    You will use GitHub Actions in combination with Bicep to deploy the application. These tools provide an approachable and sustainable solution for building CI/CD pipelines and working with Container Apps.

    PRE-REQUISITES

    Architecture

    In this tutorial, we'll setup a container app environment with a separate container for each project in the sample store app. The major components of the sample project include:

    • A Blazor Server front-end web app to display product information
    • A products API to list available products
    • An inventory API to determine how many products are in stock
    • GitHub Actions and Bicep templates to provision Azure resources and then build and deploy the sample app.

    You will explore these templates later in the tutorial.

    Public internet traffic should be proxied to the Blazor app. The back-end APIs should only be reachable via requests from the Blazor app inside the container apps environment. This setup can be achieved using container apps environment ingress configurations during deployment.

    An architecture diagram of the shopping app


    Project Sources

    Want to follow along? Fork the sample below. The tutorial can be completed with or without Dapr integration. Pick the path you feel comfortable in. Dapr provides various benefits that make working with Microservices easier - you can learn more in the docs. For this tutorial you will need GitHub and Azure CLI.

    PICK YOUR PATH

    To follow along with this tutorial, fork the relevant sample project below.

    You can run the app locally from Visual Studio:

    • Right click on the Blazor Store project and select Set as Startup Project.
    • Press the start button at the top of Visual Studio to run the app.
    • (Once running) start each API in the background by
    • right-clicking on the project node
    • selecting Debug --> Start without debugging.

    Once the Blazor app is running, you should see something like this:

    An architecture diagram of the shopping app


    Configuring Azure credentials

    In order to deploy the application to Azure through GitHub Actions, you first need to create a service principal. The service principal will allow the GitHub Actions process to authenticate to your Azure subscription to create resources and deploy code. You can learn more about Service Principals in the Azure CLI documentation. For this step you'll need to be logged into the Azure CLI.

    1) If you have not done so already, make sure to fork the sample project to your own GitHub account or organization.

2) Once you have completed this step, create a service principal using the Azure CLI command below:

subscriptionId=$(az account show --query id --output tsv)
az ad sp create-for-rbac --sdk-auth --name WebAndApiSample --role Contributor --scopes /subscriptions/$subscriptionId

3) Copy the JSON output of the CLI command to your clipboard.

4) Under the Settings tab of your forked GitHub repo, create a new secret named AzureSPN. The name is important to match the Bicep templates included in the project, which we'll review later. Paste the service principal JSON you copied into the secret and save your changes. This new secret will be used by the GitHub Actions workflow to authenticate to Azure.

A screenshot of adding GitHub secrets.

Deploy using GitHub Actions

    You are now ready to deploy the application to Azure Container Apps using GitHub Actions. The sample application includes a GitHub Actions template that is configured to build and deploy any changes to a branch named deploy. The deploy branch does not exist in your forked repository by default, but you can easily create it through the GitHub user interface.

    1) Switch to the Actions tab along the top navigation of your GitHub repository. If you have not done so already, ensure that workflows are enabled by clicking the button in the center of the page.

    A screenshot showing how to enable GitHub actions

2) Navigate to the main Code tab of your repository and select the main dropdown. Enter deploy into the branch input box, and then select Create branch: deploy from 'main'.

    A screenshot showing how to create the deploy branch

3) On the new deploy branch, navigate down into the .github/workflows folder. You should see a file called deploy.yml, which contains the main GitHub Actions workflow script. Click on the file to view its content. You'll learn more about this file later in the tutorial.

4) Click the pencil icon in the upper right to edit the document.

5) Change the RESOURCE_GROUP_NAME: value to msdocswebappapis or another valid resource group name of your choosing.

6) In the upper right of the screen, select Start commit and then Commit changes to commit your edit. This will persist the change to the file and trigger the GitHub Actions workflow to build and deploy the app.

    A screenshot showing how to commit changes

7) Switch to the Actions tab along the top navigation again. You should see the workflow running to create the necessary resources and deploy the app. The workflow may take several minutes to run. When it completes successfully, all of the jobs should have a green checkmark icon next to them.

    The completed GitHub workflow.

    Explore the Azure resources

    Once the GitHub Actions workflow has completed successfully you can browse the created resources in the Azure portal.

1) On the left navigation, select Resource Groups. Next, choose the msdocswebappapis resource group that was created by the GitHub Actions workflow.

    2) You should see seven resources available that match the screenshot and table descriptions below.

    The resources created in Azure.

• inventory (Container app): The containerized inventory API.
• msdocswebappapisacr (Container registry): A registry that stores the built container images for your apps.
• msdocswebappapisai (Application Insights): Application Insights provides advanced monitoring, logging and metrics for your apps.
• msdocswebappapisenv (Container Apps environment): A container environment that manages networking, security and resource concerns. All of your containers live in this environment.
• msdocswebappapislogs (Log Analytics workspace): A workspace environment for managing logging and analytics for the container apps environment.
• products (Container app): The containerized products API.
• store (Container app): The Blazor front-end web app.

    3) You can view your running app in the browser by clicking on the store container app. On the overview page, click the Application Url link on the upper right of the screen.

The link to browse the app.

    Understanding the GitHub Actions workflow

    The GitHub Actions workflow created and deployed resources to Azure using the deploy.yml file in the .github folder at the root of the project. The primary purpose of this file is to respond to events - such as commits to a branch - and run jobs to accomplish tasks. The deploy.yml file in the sample project has three main jobs:

    • Provision: Create the necessary resources in Azure, such as the container apps environment. This step leverages Bicep templates to create the Azure resources, which you'll explore in a moment.
    • Build: Create the container images for the three apps in the project and store them in the container registry.
    • Deploy: Deploy the container images to the different container apps created during the provisioning job.

    The deploy.yml file also accepts parameters to make the workflow more dynamic, such as setting the resource group name or the Azure region resources will be provisioned to.

    Below is a commented version of the deploy.yml file that highlights the essential steps.

    name: Build and deploy .NET application to Container Apps

    # Trigger the workflow on pushes to the deploy branch
    on:
      push:
        branches:
          - deploy

    env:
      # Set workflow variables
      RESOURCE_GROUP_NAME: msdocswebappapis

      REGION: eastus

      STORE_DOCKER: Store/Dockerfile
      STORE_IMAGE: store

      INVENTORY_DOCKER: Store.InventoryApi/Dockerfile
      INVENTORY_IMAGE: inventory

      PRODUCTS_DOCKER: Store.ProductApi/Dockerfile
      PRODUCTS_IMAGE: products

    jobs:
      # Create the required Azure resources
      provision:
        runs-on: ubuntu-latest

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Create resource group
            uses: azure/CLI@v1
            with:
              inlineScript: >
                echo "Creating resource group in Azure"
                echo "Executing 'az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}'"
                az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}

          # Use Bicep templates to create the resources in Azure
          - name: Creating resources
            uses: azure/CLI@v1
            with:
              inlineScript: >
                echo "Creating resources"
                az deployment group create --resource-group ${{ env.RESOURCE_GROUP_NAME }} --template-file '/github/workspace/Azure/main.bicep' --debug

      # Build the three app container images
      build:
        runs-on: ubuntu-latest
        needs: provision

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Set up Docker Buildx
            uses: docker/setup-buildx-action@v1

          - name: Login to ACR
            run: |
              set -euo pipefail
              access_token=$(az account get-access-token --query accessToken -o tsv)
              refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
              docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

          - name: Build the products api image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}
              file: ${{ env.PRODUCTS_DOCKER }}

          - name: Build the inventory api image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}
              file: ${{ env.INVENTORY_DOCKER }}

          - name: Build the frontend image and push it to ACR
            uses: docker/build-push-action@v2
            with:
              push: true
              tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}
              file: ${{ env.STORE_DOCKER }}

      # Deploy the three container images
      deploy:
        runs-on: ubuntu-latest
        needs: build

        steps:

          - name: Checkout to the branch
            uses: actions/checkout@v2

          - name: Azure Login
            uses: azure/login@v1
            with:
              creds: ${{ secrets.AzureSPN }}

          - name: Installing Container Apps extension
            uses: azure/CLI@v1
            with:
              inlineScript: >
                az config set extension.use_dynamic_install=yes_without_prompt

                az extension add --name containerapp --yes

          - name: Login to ACR
            run: |
              set -euo pipefail
              access_token=$(az account get-access-token --query accessToken -o tsv)
              refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
              docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

          - name: Deploy Container Apps
            uses: azure/CLI@v1
            with:
              inlineScript: >
                az containerapp registry set -n products -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n products -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}

                az containerapp registry set -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}

                az containerapp registry set -n store -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

                az containerapp update -n store -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}

          - name: logout
            run: >
              az logout

    Understanding the Bicep templates

    During the provisioning stage of the GitHub Actions workflow, the main.bicep file is processed. Bicep files provide a declarative way of generating resources in Azure and are ideal for managing infrastructure as code. You can learn more about Bicep in the related documentation. The main.bicep file in the sample project creates the following resources:

    • The container registry to store images of the containerized apps.
    • The container apps environment, which handles networking and resource management for the container apps.
    • Three container apps - one for the Blazor front-end and two for the back-end product and inventory APIs.
    • Configuration values to connect these services together.
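
    If you'd like to preview the changes this template would make before the workflow runs it, the Azure CLI what-if operation is handy. This is a sketch that assumes the resource group already exists and that you run it from the repository root:

    ```bash
    # Preview the resources main.bicep would create without actually deploying them
    az deployment group what-if \
      --resource-group msdocswebappapis \
      --template-file Azure/main.bicep
    ```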

    main.bicep without Dapr

    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    // create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    // create the various configuration pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the store api container app
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    main.bicep with Dapr


    param location string = resourceGroup().location

    // create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

    // create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

    // create the various config pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

    // create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

    // create the store api container app
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: true
    }
    }


    Bicep Modules

    The main.bicep file references modules to create resources, such as module products. Modules are a feature of Bicep templates that enable you to abstract resource declarations into their own files or sub-templates. As the main.bicep file is processed, the defined modules are also evaluated. Modules allow you to create resources in a more organized and reusable way. They can also define input and output parameters that are passed to and from the parent template, such as the name of a resource.

    For example, the environment.bicep module extracts the details of creating a container apps environment into a reusable template. The module defines necessary resource dependencies such as Log Analytics Workspaces and an Application Insights instance.
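
    As a rough illustration, you could deploy the environment module on its own and inspect the outputs it hands back to the parent template. The resource group name and file path below are assumptions based on this sample:

    ```bash
    # Deploy only environment.bicep and print its outputs (environment id, App Insights key, connection string)
    az deployment group create \
      --resource-group msdocswebappapis \
      --template-file Azure/environment.bicep \
      --query properties.outputs
    ```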

    environment.bicep without Dapr

    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString

    environment.bicep with Dapr


    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString


    The container_app.bicep template defines numerous parameters to provide a reusable template for creating container apps. This allows the module to be used in other CI/CD pipelines as well.

    container_app.bicep without Dapr

    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn

    container_app.bicep with Dapr


    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param appProtocol string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn


    Understanding configuration differences with Dapr

    The code for this specific sample application is largely the same whether or not Dapr is integrated. However, even with this simple app, there are a few benefits and configuration differences when using Dapr that are worth exploring.

    In this scenario most of the changes are related to communication between the container apps. However, you can explore the full range of Dapr benefits by reading the Dapr integration with Azure Container Apps article in the conceptual documentation.

    Without Dapr

    Without Dapr, the main.bicep template handles wiring up the front-end store app to communicate with the back-end APIs by manually managing environment variables. The Bicep template retrieves the fully qualified domain names (FQDNs) of the API apps as output parameters when they are created. Those values are then set as environment variables on the store container app.


    // Retrieve environment variables from API container creation
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    // create the store api container app, passing in config
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    The environment variables are then retrieved inside the Program class and used to configure the base URLs of the corresponding HTTP clients.


    builder.Services.AddHttpClient("Products", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("ProductsApi")));
    builder.Services.AddHttpClient("Inventory", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("InventoryApi")));

    With Dapr

    Dapr can be enabled on a container app when it is created, as seen below. This configuration adds a Dapr sidecar to the app to streamline discovery and communication features between the different container apps in your environment.


    // Create the container app with Dapr enabled
    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]

    // Rest of template omitted for brevity...
    }
    }

    Some of these Dapr features can be surfaced through the program file. You can configure your HttpClient to leverage Dapr configurations when communicating with other apps in your environment.


    // reconfigure code to make requests to Dapr sidecar
    var baseURL = (Environment.GetEnvironmentVariable("BASE_URL") ?? "http://localhost") + ":" + (Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500");
    builder.Services.AddHttpClient("Products", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Products");
    });

    builder.Services.AddHttpClient("Inventory", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Inventory");
    });


    Clean up resources

    If you're not going to continue to use this application, you can delete the Azure Container Apps and all the associated services by removing the resource group.

    Follow these steps in the Azure portal to remove the resources you created (a CLI alternative is shown after the steps):

    1. In the Azure portal, navigate to the msdocswebappapis resource group using the left navigation or search bar.
    2. Select the Delete resource group button at the top of the resource group Overview.
    3. Enter the resource group name msdocswebappapis in the Are you sure you want to delete "msdocswebappapis" confirmation dialog.
    4. Select Delete.
      The process to delete the resource group may take a few minutes to complete.
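
    The CLI alternative is a single command; this sketch assumes the default resource group name:

    ```bash
    # Delete the resource group and every resource inside it
    az group delete --name msdocswebappapis --yes --no-wait
    ```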


    · 10 min read
    Ayca Bas

    Welcome to Day 20 of #30DaysOfServerless!

    Every day millions of people spend their precious time in productivity tools. What if you use data and intelligence behind the Microsoft applications (Microsoft Teams, Outlook, and many other Office apps) to build seamless automations and custom apps to boost productivity?

    In this post, we'll learn how to build a seamless onboarding experience for new employees joining a company with the power of Microsoft Graph, integrated with Event Hubs and Logic Apps!


    What We'll Cover

    • ✨ The power of Microsoft Graph
    • 🖇️ How do Microsoft Graph and Event Hubs work together?
    • 🛠 Let's Build an Onboarding Workflow!
      • 1️⃣ Setup Azure Event Hubs + Key Vault
      • 2️⃣ Subscribe to users, receive change notifications from Logic Apps
      • 3️⃣ Create Onboarding workflow in the Logic Apps
    • 🚀 Debug: Your onboarding experience
    • ✋ Exercise: Try this tutorial out yourself!
    • 📚 Resources: For Self-Study


    ✨ The Power of Microsoft Graph

    Microsoft Graph is the gateway to data and intelligence in the Microsoft 365 platform. Microsoft Graph exposes REST APIs and client libraries to access data across Microsoft 365 core services such as Calendar, Teams, To Do, Outlook, People, Planner, OneDrive, OneNote and more.

    Overview of Microsoft Graph

    You can build custom experiences by using Microsoft Graph, such as automating the onboarding process for new employees. When new employees are created in Azure Active Directory, they will be automatically added to the Onboarding team on Microsoft Teams.

    Solution architecture


    🖇️ Microsoft Graph with Event Hubs

    Microsoft Graph uses a webhook mechanism to track changes in resources and deliver change notifications to the clients. For example, with Microsoft Graph Change Notifications, you can receive change notifications when:

    • a new task is added in the to-do list
    • a user changes the presence status from busy to available
    • an event is deleted/cancelled from the calendar

    If you'd like to track a large set of resources at a high frequency, use Azure Event Hubs instead of traditional webhooks to receive change notifications. Azure Event Hubs is a popular real-time event ingestion and distribution service built for scale.

    EVENT GRID - PARTNER EVENTS

    Microsoft Graph Change Notifications can also be received by using Azure Event Grid -- currently available for Microsoft Partners! Read the Partner Events Overview documentation for details.

    Setup Azure Event Hubs + Key Vault.

    To get Microsoft Graph Change Notifications delivered to Azure Event Hubs, we'll have to set up Azure Event Hubs and Azure Key Vault. We'll use Azure Key Vault to give Microsoft Graph access to the Event Hubs connection string.
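
    If you'd rather script this setup than click through the portal, a rough CLI equivalent is sketched below. The resource group, namespace, hub, policy, and vault names are placeholders I've made up for illustration:

    ```bash
    # Create an Event Hubs namespace, an event hub, a Send/Listen policy, and a consumer group
    az eventhubs namespace create --name onboarding-ns --resource-group onboarding-rg --location eastus
    az eventhubs eventhub create --name onboarding-hub --namespace-name onboarding-ns --resource-group onboarding-rg
    az eventhubs eventhub authorization-rule create --name SendListenPolicy \
      --eventhub-name onboarding-hub --namespace-name onboarding-ns --resource-group onboarding-rg \
      --rights Send Listen
    az eventhubs eventhub consumer-group create --name onboarding \
      --eventhub-name onboarding-hub --namespace-name onboarding-ns --resource-group onboarding-rg

    # Store the connection string in Key Vault so Microsoft Graph can retrieve it
    CONNECTION_STRING=$(az eventhubs eventhub authorization-rule keys list --name SendListenPolicy \
      --eventhub-name onboarding-hub --namespace-name onboarding-ns --resource-group onboarding-rg \
      --query primaryConnectionString -o tsv)
    az keyvault create --name onboarding-kv --resource-group onboarding-rg --location eastus
    az keyvault secret set --vault-name onboarding-kv --name eventhub-connection --value "$CONNECTION_STRING"
    # You still need to grant the Microsoft Graph Change Tracking principal Get access to secrets (see the portal steps below)
    ```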

    1️⃣ Create Azure Event Hubs

    1. Go to the Azure Portal and select Create a resource, type Event Hubs, and select Create.
    2. Fill in the Event Hubs namespace creation details, and then click Create.
    3. Go to the newly created Event Hubs namespace page, select Event Hubs tab from the left pane and + Event Hub:
      • Name your Event Hub as Event Hub
      • Click Create.
    4. Click the name of the Event Hub, and then select Shared access policies and + Add to add a new policy:
      • Give a name to the policy
      • Check Send and Listen
      • Click Create.
    5. After the policy has been created, click the name of the policy to open the details panel, and then copy the Connection string-primary key value. Write it down; you'll need it for the next step.
    6. Go to Consumer groups tab in the left pane and select + Consumer group, give a name for your consumer group as onboarding and select Create.

    2️⃣ Create Azure Key Vault

    1. Go to Azure Portal and select Create a resource, type Key Vault and select Create.
    2. Fill in the Key Vault creation details, and then click Review + Create.
    3. Go to the newly created Key Vault, select the Secrets tab from the left pane, and click + Generate/Import:
      • Give a name to the secret
      • For the value, paste in the connection string you generated at the Event Hubs step
      • Click Create
      • Copy the name of the secret.
    4. Select Access Policies from the left pane and + Add Access Policy:
      • For Secret permissions, select Get
      • For Principal, select Microsoft Graph Change Tracking
      • Click Add.
    5. Select Overview tab from the left pane and copy the Vault URI.

    Subscribe to change notifications using Logic Apps

    To start receiving Microsoft Graph Change Notifications, we'll need to create a subscription to the resource we'd like to track - here, users. We'll use Azure Logic Apps to create the subscription.

    To create the subscription, we'll need to make an HTTP POST request to https://graph.microsoft.com/v1.0/subscriptions. Microsoft Graph requires Azure Active Directory authentication to make API calls. First, we'll register an app in Azure Active Directory, and then we'll make the Microsoft Graph Subscription API call with Azure Logic Apps.

    1️⃣ Create an app in Azure Active Directory

    1. In the Azure Portal, go to Azure Active Directory and select App registrations from the left pane and select + New registration. Fill in the details for the new App registration form as below:
      • Name: Graph Subscription Flow Auth
      • Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
      • Select Register.
    2. Go to the newly registered app in Azure Active Directory and select API permissions:
      • Select + Add a permission and Microsoft Graph
      • Select Application permissions and add User.Read.All and Directory.Read.All.
      • Select Grant admin consent for the organization
    3. Select Certificates & secrets tab from the left pane, select + New client secret:
      • Choose desired expiry duration
      • Select Add
      • Copy the value of the secret.
    4. Go to Overview from the left pane, copy Application (client) ID and Directory (tenant) ID.

    2️⃣ Create subscription with Azure Logic Apps

    1. Go to the Azure Portal and select Create a resource, type Logic App, and select Create.

    2. Fill in the Logic Apps creation details, and then click Create.

    3. Go to the newly created Logic Apps page, select Workflows tab from the left pane and select + Add:

      • Give a name to the new workflow as graph-subscription-flow
      • Select Stateful as a state type
      • Click Create.
    4. Go to graph-subscription-flow, and then select Designer tab.

    5. In the Choose an operation section, search for Schedule and select Recurrence as a trigger. Fill in the parameters as below:

      • Interval: 61
      • Frequency: Minute
      • Time zone: Select your own time zone
      • Start time: Set a start time
    6. Select + button in the flow and select add an action. Search for HTTP and select HTTP as an action. Fill in the parameters as below:

      • Method: POST
      • URI: https://graph.microsoft.com/v1.0/subscriptions
      • Headers:
        • Key: Content-type
        • Value: application/json
      • Body:
      {
      "changeType": "created, updated",
      "clientState": "secretClientValue",
      "expirationDateTime": "@{addHours(utcNow(), 1)}",
      "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47",
      "resource": "users"
      }

      In notificationUrl, make sure to replace <YOUR-VAULT-URI> with the vault uri and <YOUR-KEY-VAULT-SECRET-NAME> with the secret name that you copied from the Key Vault.

      In resource, define the resource type you'd like to track changes for. For our example, we will track changes for the users resource.

      • Authentication:
        • Authentication type: Active Directory OAuth
        • Authority: https://login.microsoft.com
        • Tenant: Directory (tenant) ID copied from AAD app
        • Audience: https://graph.microsoft.com
        • Client ID: Application (client) ID copied from AAD app
        • Credential Type: Secret
        • Secret: value of the secret copied from AAD app
    7. Select Save and run your workflow from the Overview tab.

      Check your subscription in Graph Explorer: If you'd like to make sure that your subscription was created successfully by Logic Apps, you can go to Graph Explorer, log in with your Microsoft 365 account, and make a GET request to https://graph.microsoft.com/v1.0/subscriptions. Your subscription should appear in the response after it's created successfully.
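
      If you prefer the command line over Graph Explorer, you can make the same GET request with curl. This is only a sketch; which subscriptions you see depends on the identity and app you authenticate as:

      ```bash
      # Get a Microsoft Graph token for the signed-in Azure CLI user, then list subscriptions
      GRAPH_TOKEN=$(az account get-access-token --resource https://graph.microsoft.com --query accessToken -o tsv)
      curl -s -H "Authorization: Bearer $GRAPH_TOKEN" https://graph.microsoft.com/v1.0/subscriptions
      ```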

    Subscription workflow success

    After the subscription is created successfully by Logic Apps, Azure Event Hubs will receive notifications whenever a new user is created in Azure Active Directory.


    Create Onboarding workflow in Logic Apps

    We'll create a second workflow in the Logic App to receive change notifications from Event Hubs when a new user is created in Azure Active Directory and add the new user to the Onboarding team on Microsoft Teams.

    1. Go to the Logic Apps you created in the previous steps, select Workflows tab and create a new workflow by selecting + Add:
      • Give a name to the new workflow as teams-onboarding-flow
      • Select Stateful as a state type
      • Click Create.
    2. Go to teams-onboarding-flow, and then select Designer tab.
    3. In the Choose an operation section, search for Event Hub, select When events are available in Event Hub as a trigger. Setup Event Hub connection as below:
      • Create Connection:
        • Connection name: Connection
        • Authentication Type: Connection String
        • Connection String: Go to Event Hubs > Shared Access Policies > RootManageSharedAccessKey and copy Connection string–primary key
        • Select Create.
      • Parameters:
        • Event Hub Name: Event Hub
        • Consumer Group Name: onboarding
    4. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: Events
    5. Inside For each, select + in the flow and add an action, search for Data operations and select Parse JSON. Fill in Parse JSON action as below:
      • Content: Events Content
      • Schema: Copy the json content from schema-parse.json and paste as a schema
    6. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: value
      1. Inside For each, select + in the flow and add an action, search for Microsoft Teams and select Add a member to a team. Login with your Microsoft 365 account to create a connection and fill in Add a member to a team action as below:
      • Team: Create an Onboarding team on Microsoft Teams and select it
      • A user AAD ID for the user to add to a team: id
    7. Select Save.

    🚀 Debug your onboarding experience

    To debug our onboarding experience, we'll need to create a new user in Azure Active Directory and see if they are automatically added to the Onboarding team on Microsoft Teams.

    1. Go to Azure Portal and select Azure Active Directory from the left pane and go to Users. Select + New user and Create new user. Fill in the details as below:

      • User name: JaneDoe
      • Name: Jane Doe

      new user in Azure Active Directory

    2. When you add Jane Doe as a new user, it should trigger the teams-onboarding-flow to run. teams onboarding flow success

    3. Once the teams-onboarding-flow runs successfully, you should be able to see Jane Doe as a member of the Onboarding team on Microsoft Teams! 🥳 new member in Onboarding team on Microsoft Teams

    Congratulations! 🎉

    You just built an onboarding experience using Azure Logic Apps, Azure Event Hubs and Azure Key Vault.


    📚 Resources



    · 10 min read
    Brian Benz

    Welcome to Day 18 of #30DaysOfServerless!

    Yesterday my Serverless September post introduced you to making Azure Logic Apps and Azure Cosmos DB work together with a sample application that collects weather data. Today I'm sharing a more robust solution that actually reads my mail. Let's learn about Teaching the cloud to read your mail!

    Ready? Let's go!


    What We'll Cover

    • Introduction to the ReadMail solution
    • Setting up Azure storage, Cosmos DB and Computer Vision
    • Connecting it all together with a Logic App
    • Resources: For self-study!


    Introducing the ReadMail solution

    The US Postal system offers a subscription service that sends you images of mail it will be delivering to your home. I decided it would be cool to try getting Azure to collect data based on these images, so that I could categorize my mail and track the types of mail that I received.

    To do this, I used Azure storage, Cosmos DB, Logic Apps, and computer vision. When a new email comes in from the US Postal service (USPS), it triggers a logic app that:

    • Posts attachments to Azure storage
    • Triggers Azure Computer vision to perform an OCR function on attachments
    • Extracts any results into a JSON document
    • Writes the JSON document to Cosmos DB

    workflow for the readmail solution

    In this post I'll walk you through setting up the solution for yourself.

    Prerequisites

    Setup Azure Services

    First, we'll create all of the target services that our Logic App will use, then we'll create the Logic App itself.

    1. Azure Storage

    We'll be using Azure storage to collect attached images from emails as they arrive. Adding images to Azure storage will also trigger a workflow that performs OCR on new attached images and stores the OCR data in Cosmos DB.

    To create a new Azure storage account from the portal dashboard, Select Create a resource > Storage account > Create.

    The Basics tab covers all of the features and information that we will need for this solution:

    | Section | Field | Required or optional | Description |
    | --- | --- | --- | --- |
    | Project details | Subscription | Required | Select the subscription for the new storage account. |
    | Project details | Resource group | Required | Create a new resource group that you will use for storage, Cosmos DB, Computer Vision and the Logic App. |
    | Instance details | Storage account name | Required | Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. |
    | Instance details | Region | Required | Select the appropriate region for your storage account. |
    | Instance details | Performance | Required | Select Standard performance for general-purpose v2 storage accounts (default). |
    | Instance details | Redundancy | Required | Select locally-redundant storage (LRS) for this example. |

    Select Review + create to accept the remaining default options, then validate and create the account.
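
    For reference, a roughly equivalent Azure CLI command is shown below; the account and resource group names are placeholders, and storage account names must be globally unique:

    ```bash
    # Create a general-purpose v2, locally-redundant storage account
    az storage account create \
      --name readmailstorage123 \
      --resource-group readmail-rg \
      --location eastus \
      --sku Standard_LRS \
      --kind StorageV2
    ```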

    2. Azure CosmosDB

    Cosmos DB will be used to store the JSON documents returned by the Computer Vision OCR process.

    See more details and screen shots for setting up CosmosDB in yesterday's Serverless September post - Using Logic Apps with Cosmos DB

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then for simplicity use the same resource group you created when you set up storage. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    | Setting | Action |
    | --- | --- |
    | Container ID | id |
    | Container partition | /id |

    Press OK to create a database and container
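
    If you'd rather script the Cosmos DB setup, a hedged CLI sketch (account, database, and container names are placeholders) might look like this:

    ```bash
    # Create a Cosmos DB (SQL API) account on the free tier, then a database and a container partitioned on /id
    az cosmosdb create --name readmail-cosmos --resource-group readmail-rg --enable-free-tier true
    az cosmosdb sql database create --account-name readmail-cosmos --resource-group readmail-rg --name readmaildb
    az cosmosdb sql container create --account-name readmail-cosmos --resource-group readmail-rg \
      --database-name readmaildb --name mail --partition-key-path "/id"
    ```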

    3. Azure Computer Vision

    Azure Cognitive Services' Computer Vision will perform an OCR process on each image attachment that is stored in Azure storage.

    From the portal dashboard, Select Create a resource > AI + Machine Learning > Computer Vision > Create.

    The Basics and Identity tabs cover all of the features and information that we will need for this solution:

    Basics Tab

    | Section | Field | Required or optional | Description |
    | --- | --- | --- | --- |
    | Project details | Subscription | Required | Select the subscription for the new service. |
    | Project details | Resource group | Required | Use the same resource group that you used for Azure storage and Cosmos DB. |
    | Instance details | Region | Required | Select the appropriate region for your Computer Vision service. |
    | Instance details | Name | Required | Choose a unique name for your Computer Vision service. |
    | Instance details | Pricing | Required | Select the free tier for this example. |

    Identity Tab

    | Section | Field | Required or optional | Description |
    | --- | --- | --- | --- |
    | System assigned managed identity | Status | Required | Enable system assigned identity to grant the resource access to other existing resources. |

    Select Review + create to accept the remaining default options, then validate and create the account.
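
    The same resource can also be created from the CLI; the names below are placeholders, and the second command enables the system-assigned identity described in the Identity tab above:

    ```bash
    # Create a free-tier Computer Vision resource, accepting the service terms
    az cognitiveservices account create --name readmail-vision --resource-group readmail-rg \
      --kind ComputerVision --sku F0 --location eastus --yes
    # Enable the system-assigned managed identity
    az cognitiveservices account identity assign --name readmail-vision --resource-group readmail-rg
    ```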


    Connect it all with a Logic App

    Now we're ready to put this all together in a Logic App workflow!

    1. Create Logic App

    From the portal dashboard, Select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults.

    2. Create Workflow: Add Trigger

    Once the Logic App is created, select Create a workflow from designer.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for outlook.com on the right under Add a trigger. Choose outlook.com. Choose When a new email arrives as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Set the following values:

    | Parameter | Value |
    | --- | --- |
    | Folder | Inbox |
    | Importance | Any |
    | Only With Attachments | Yes |
    | Include Attachments | Yes |

    Then add a new parameter:

    | Parameter | Value |
    | --- | --- |
    | From | Add the email address that sends you the email with attachments |
    3. Create Workflow: Add Action (for Trigger)

    Choose add an action and choose control > for-each.

    logic app for each

    Inside the for-each action, in Select an output from previous steps, choose attachments. Then, again inside the for-each action, add the create blob action:

    Set the following values:

    | Parameter | Value |
    | --- | --- |
    | Folder Path | /mailreaderinbox |
    | Blob Name | Attachments Name |
    | Blob Content | Attachments Content |

    This extracts attachments from the email and creates a new blob for each attachment.

    Next, inside the same for-each action, add the get blob content action.

    Set the following values:

    | Parameter | Value |
    | --- | --- |
    | Blob | id |
    | Infer content type | Yes |

    We create and read from a blob for each attachment because Computer Vision needs a non-virtual source to read from when performing an OCR process. Because we enabled a system-assigned identity to grant Computer Vision access to other existing resources, it can access the blob but not the outlook.com attachment. Also, we pass the ID of the blob to use as a unique ID when writing to Cosmos DB.

    create blob from attachments

    Next, inside the same for-each action, choose add an action and choose control > condition. Set the value to Media Type > is equal to > image/JPEG

    The USPS sends attachments of multiple types, but we only want to scan attachments that have images of our mail, which are always JPEG images. If the condition is true, we will process the image with Computer Vision OCR and write the results to a JSON document in CosmosDB.

    In the True section of the condition, add an action and choose Computer Vision API > Optical Character Recognition (OCR) to JSON.

    Set the following values:

    | Parameter | Value |
    | --- | --- |
    | Image Source | Image Content |
    | Image content | File Content |

    In the same True section of the condition, choose add an action and choose Cosmos DB. Choose Create or Update Document from the actions. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    Be sure to use the ID passed from blob storage as your unique ID for Cosmos DB. That way you can troubleshoot any JSON or OCR issues by tracing the JSON document in Cosmos DB back to the blob in Azure storage. Also, include the Computer Vision JSON response, as it contains the results of the Computer Vision OCR scan. All other elements are optional.

    4. Test Workflow

    When complete, you should have an action in the Logic App designer that looks something like this:

    Logic App workflow create or update document in cosmosdb

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB each time that an email arrives with image attachments.

    Check the data in Cosmos DB by opening the Data Explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    5. Congratulations

    You just built your personal ReadMail solution with Logic Apps! 🎉


    Resources: For self-study!

    Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!



    · 6 min read
    Brian Benz

    Welcome to Day 17 of #30DaysOfServerless!

    In past weeks, we've covered serverless technologies that provide core capabilities (functions, containers, microservices) for building serverless solutions. This week we're looking at technologies that make service integrations more seamless, starting with Logic Apps. Let's look at one usage example today!

    Ready? Let's Go!


    What We'll Cover

    • Introduction to Logic Apps
    • Setting up Cosmos DB for Logic Apps
    • Setting up a Logic App connection and event
    • Writing data to Cosmos DB from a Logic app
    • Resources: For self-study!


    Introduction to Logic Apps

    Previously in Serverless September, we've covered Azure Functions, where the event triggers code. In Logic Apps, the event triggers a workflow that you design. Logic Apps enable serverless applications to connect to external data sources and then automate business processes via workflows.

    In this post I'll walk you through setting up a Logic App that works with Cosmos DB. For this example, we'll connect to the MSN weather service and design a Logic App workflow that collects data when the weather changes and writes the data to Cosmos DB.

    PREREQUISITES

    Setup Cosmos DB for Logic Apps

    Cosmos DB has many APIs to choose from, but to use the default Logic App connection, we need to choose the Cosmos DB SQL API. We'll set this up via the Azure Portal.

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then create a new resource group called CosmosWeather. Enter an account name and choose a location, select provisioned throughput capacity mode and apply the free tier discount. From here you can select Review and Create, then Create

    Azure Cosmos DB is available in two different capacity modes: provisioned throughput and serverless. You can perform the same database operations in both modes, but the way you get billed for these operations is different. We will be using provisioned throughput and the free tier for this example.

    Setup the CosmosDB account

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    | Setting | Action |
    | --- | --- |
    | Container ID | id |
    | Container partition | /id |

    Press OK to create a database and container

    A database is analogous to a traditional DBMS namespace. It's used to organize one or more containers.

    Setup the CosmosDB Container

    Now we're ready to set up our Logic App and write to Cosmos DB!

    Setup Logic App connection + event

    Once the Cosmos DB SQL API account is created, we can set up our Logic App. From the portal dashboard, Select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults. Once your new Logic App is created, select Create a workflow from designer to get started.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for weather on the right under Add a trigger. Choose MSN Weather. Choose When the current conditions change as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Add a location. Valid locations are City, Region, State, Country, Landmark, Postal Code, latitude and longitude. This triggers a new workflow when the conditions change for a location.

    Write data from Logic App to Cosmos DB

    Now we are ready to set up the action to write data to Cosmos DB. Choose add an action and choose Cosmos DB.

    An action is each step in a workflow after the trigger. Every action runs some operation in a workflow.

    In this case, we will be writing a JSON document to the Cosmos DB container we created earlier. Choose Create or Update Document from the actions. At this point you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger

    Start with the connection to set up the Cosmos DB action. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    You will need a unique ID for each document that you write to Cosmos DB; for that you can use an expression. Because we declared id to be our unique ID in Cosmos DB, we will use that for the name. Under expressions, type guid() and press enter to add a unique ID to the JSON document. When complete, you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger and action

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB over the next few minutes.

    Check the data in Cosmos DB by opening the Data Explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    Resources: For self-study!

    Once you've grasped the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    Image showing container apps role assignment

  • Lastly, we need to restart the container app revision; to do so, run the command below:

     ##Get revision name and assign it to a variable
    $REVISION_NAME = (az containerapp revision list `
    --name $BACKEND_SVC_NAME `
    --resource-group $RESOURCE_GROUP `
    --query [0].name)

    ##Restart revision by name
    az containerapp revision restart `
    --resource-group $RESOURCE_GROUP `
    --name $BACKEND_SVC_NAME `
    --revision $REVISION_NAME
  • Run end-to-end Test on Azure

    From the Azure Portal, select the Azure Container App orders-processor and navigate to Log stream under Monitoring tab, leave the stream connected and opened. From the Azure Portal, select the Azure Service Bus Namespace ordersservices, select the topic orderreceivedtopic, select the subscription named orders-processor-subscription, then click on Service Bus Explorer (preview). From there we need to publish/send a message. Use the JSON payload below

    ```json
    {
    "data": {
    "reference": "Order 150",
    "quantity": 150,
    "createdOn": "2022-05-10T12:45:22.0983978Z"
    }
    }
    ```

    If all is configured correctly, you should start seeing the information logs in the Container Apps Log stream, similar to the images below. Image showing publishing messages from Azure Service

    Information logs on the Log stream of the deployed Azure Container App Image showing ACA Log Stream

    🎉 CONGRATULATIONS

    You have successfully deployed an Azure Container App to the cloud and configured the Dapr Pub/Sub API with Azure Service Bus.

    9. Clean up

    If you are done with the tutorial, use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name $RESOURCE_GROUP

    Exercise

    I've left the configuration of the Dapr State Store API with Azure Cosmos DB for you :)

    When you look at the action method OrderReceived in the controller ExternalOrdersController, you will see that I left a line with a ToDo: note; this line is responsible for saving the received message (OrderModel) into Azure Cosmos DB.

    There is no need to change anything in the code base (other than removing this commented line). That's the beauty of Dapr Building Blocks and how easily they let us plug components into our microservice application without any plumbing or bringing in external SDKs.

    You do need to work on the configuration part of the Dapr State Store by creating a new component file, just as we did for the Pub/Sub API. The things you need to work on are:

    • Provision Azure Cosmos DB Account and obtain its masterKey.
    • Create a Dapr Component file adhering to Dapr Specs.
    • Create an Azure Container Apps component file adhering to ACA component specs.
    • Test locally on your dev machine using Dapr Component file.
    • Register the new Dapr State Store component with the Azure Container Apps environment and set the Cosmos DB masterKey from the Azure Portal (see the CLI sketch after this list). If you want to challenge yourself more, use the Managed Identity approach as done in this post! It's the right way to protect your keys, and you won't have to worry about managing Cosmos DB keys anymore!
    • Build a new image of the application and push it to Azure Container Registry.
    • Update Azure Container Apps and create a new revision which contains the updated code.
    • Verify the results by checking Azure Cosmos DB, you should see the Order Model stored in Cosmos DB.
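
    For the registration step referenced above, the Container Apps CLI can load a Dapr component definition from a YAML file. The environment variable names and the statestore.yaml file name here are assumptions, not something shipped with the sample:

    ```bash
    # Register a Dapr state store component (defined in statestore.yaml) with the Container Apps environment
    az containerapp env dapr-component set \
      --name $ENVIRONMENT_NAME \
      --resource-group $RESOURCE_GROUP \
      --dapr-component-name statestore \
      --yaml statestore.yaml
    ```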

    If you need help, you can always refer to my blog post Azure Container Apps State Store With Dapr State Management API, which contains exactly what you need to implement here, so I'm very confident you will be able to complete this exercise with no issues. Happy coding :)

    What's Next?

    If you enjoyed working with Dapr and Azure Container Apps, and you want a deeper dive into more complex scenarios (Dapr bindings, service discovery, auto scaling with KEDA, synchronous service communication, distributed tracing, health probes, etc.) where multiple services are deployed to a single Container Apps environment, I have created a detailed tutorial that walks you through building the application step by step.

    The posts published so far are listed below, and I'm publishing more on a weekly basis, so stay tuned :)

    Resources



    · 11 min read
    Kendall Roden

    Welcome to Day 13 of #30DaysOfServerless!

    In the previous post, we learned about all things Distributed Application Runtime (Dapr) and highlighted the capabilities you can unlock through managed Dapr in Azure Container Apps! Today, we'll dive into how we can make use of Container Apps secrets and managed identities to securely access cloud-hosted resources that your Container Apps depend on!

    Ready? Let's go.


    What We'll Cover

    • Secure access to external services overview
    • Using Container Apps Secrets
    • Using Managed Identity for connecting to Azure resources
    • Using Dapr secret store component references (Dapr-only)
    • Conclusion
    • Resources: For self-study!


    Securing access to external services

    In most, if not all, microservice-based applications, one or more services in the system will rely on other cloud-hosted resources; think external services like databases, secret stores, message brokers, event sources, etc. To interact with these services, an application must have the ability to establish a secure connection. Traditionally, an application will authenticate to these backing resources using some type of connection string or password.

    I'm not sure if it was just me, but one of the first things I learned as a developer was to ensure credentials and other sensitive information were never checked into the codebase. The ability to inject these values at runtime is a non-negotiable.

    In Azure Container Apps, applications can securely leverage connection information via Container Apps Secrets. If the resource is Azure-based, a more ideal solution that removes the dependence on secrets altogether is using Managed Identity.

    Specifically for Dapr-enabled container apps, users can now tap into the power of the Dapr secrets API! With this new capability unlocked in Container Apps, users can call the Dapr secrets API from application code to securely access secrets from Key Vault or other backing secret stores. In addition, customers can also make use of a secret store component reference when wiring up Dapr state store components and more!

Also, I'm excited to share that support for Dapr + Managed Identity is now available! What does this mean? You can enable Managed Identity for your container app, and when establishing connections via Dapr, the Dapr sidecar can use this identity. The result is simplified components without the need for secrets when connecting to Azure services!

    Let's dive a bit deeper into the following three topics:

    1. Using Container Apps secrets in your container apps
    2. Using Managed Identity to connect to Azure services
    3. Connecting to services securely for Dapr-enabled apps

    Secure access to external services without Dapr

    Leveraging Container Apps secrets at runtime

Users can leverage this approach for any values which need to be securely stored; however, it is recommended to use Managed Identity where possible when connecting to Azure-specific resources.

    First, let's establish a few important points regarding secrets in container apps:

    • Secrets are scoped at the container app level, meaning secrets cannot be shared across container apps today
    • When running in multiple-revision mode,
      • changes to secrets do not generate a new revision
  • running revisions will not be automatically restarted to reflect changes. If you want existing container app revisions to pick up changed secret values, you will need to restart those revisions.
    STEP 1

    Provide the secure value as a secret parameter when creating your container app using the syntax "SECRET_NAME=SECRET_VALUE"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name queuereader \
    --environment "my-environment-name" \
    --image demos/queuereader:v1 \
    --secrets "queue-connection-string=$CONNECTION_STRING"
    STEP 2

    Create an environment variable which references the value of the secret created in step 1 using the syntax "ENV_VARIABLE_NAME=secretref:SECRET_NAME"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name myQueueApp \
    --environment "my-environment-name" \
    --image demos/myQueueApp:v1 \
    --secrets "queue-connection-string=$CONNECTIONSTRING" \
    --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"

    This ConnectionString environment variable can be used within your application code to securely access the connection string value at runtime.
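As a rough illustration, here's a minimal sketch of reading those values in application code, assuming a .NET app using the Azure.Storage.Queues SDK and the environment variable names from the command above:

using Azure.Storage.Queues;

// Both values are injected by Container Apps at runtime; the secret value itself
// never appears in the codebase or the container image.
var queueName = Environment.GetEnvironmentVariable("QueueName");
var connectionString = Environment.GetEnvironmentVariable("ConnectionString");

var queueClient = new QueueClient(connectionString, queueName);
await queueClient.SendMessageAsync("Hello from Azure Container Apps!");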

    Using Managed Identity to connect to Azure services

    A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. This approach is recommended where possible as it eliminates the need for managing secret credentials in your container apps and allows you to properly scope the permissions needed for a given container app using role-based access control. Both system-assigned and user-assigned identities are available in container apps. For more background on managed identities in Azure AD, see Managed identities for Azure resources.

To configure your app with a system-assigned managed identity, follow steps similar to the following:

    STEP 1

    Run the following command to create a system-assigned identity for your container app

    az containerapp identity assign \
    --name "myQueueApp" \
    --resource-group "my-resource-group" \
    --system-assigned
    STEP 2

    Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    az containerapp identity show \
    --name "myQueueApp" \
    --resource-group "my-resource-group"
    STEP 3

Assign the appropriate roles and permissions to your container app's managed identity using the Principal ID from step 2, based on the resources you need to access (example below)

    az role assignment create \
    --role "Storage Queue Data Contributor" \
    --assignee $PRINCIPAL_ID \
    --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/queueServices/default/queues/<queue>"

After running the above commands, your container app will be able to access your Azure Storage queue because its managed identity has been assigned the "Storage Queue Data Contributor" role. The role assignments you create depend solely on the resources your container app needs to access. To instrument your code to use this managed identity, see more details here.
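To make that concrete, here's a minimal sketch of application code using the identity, assuming a .NET app with the Azure.Identity and Azure.Storage.Queues SDKs; the storage account and queue names are placeholders:

using Azure.Identity;
using Azure.Storage.Queues;

// DefaultAzureCredential resolves to the container app's managed identity at runtime,
// so no connection string or secret is needed.
var credential = new DefaultAzureCredential();
var queueUri = new Uri("https://<storage-account>.queue.core.windows.net/<queue>");

var queueClient = new QueueClient(queueUri, credential);
await queueClient.SendMessageAsync("Hello via managed identity!");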

    In addition to using managed identity to access services from your container app, you can also use managed identity to pull your container images from Azure Container Registry.

    Secure access to external services with Dapr

    For Dapr-enabled apps, there are a few ways to connect to the resources your solutions depend on. In this section, we will discuss when to use each approach.

    1. Using Container Apps secrets in your Dapr components
    2. Using Managed Identity with Dapr Components
    3. Using Dapr Secret Stores for runtime secrets and component references

    Using Container Apps secrets in Dapr components

Prior to support for the Dapr Secrets Management building block, this was the only approach available for securely storing sensitive values for use in Dapr components.

In Dapr OSS, when no secret store reference is provided in a Dapr component file, the default secret store is set to "Kubernetes secrets". In Container Apps, we do not expose the ability to use this default store. Rather, Container Apps secrets can be used in its place.

    With the introduction of the Secrets API and the ability to use Dapr + Managed Identity, this approach is useful for a limited number of scenarios:

    • Quick demos and dev/test scenarios using the Container Apps CLI
    • Securing values when a secret store is not configured or available for use
• Using service principal credentials to configure an Azure Key Vault secret store component (using Managed Identity is recommended)
    • Securing access credentials which may be required when creating a non-Azure secret store component
    STEP 1

    Create a Dapr component which can be used by one or more services in the container apps environment. In the below example, you will create a secret to store the storage account key and reference this secret from the appropriate Dapr metadata property.

componentType: state.azure.blobstorage
version: v1
metadata:
- name: accountName
  value: testStorage
- name: accountKey
  secretRef: account-key
- name: containerName
  value: myContainer
secrets:
- name: account-key
  value: "<STORAGE_ACCOUNT_KEY>"
scopes:
- myApp
    STEP 2

    Deploy the Dapr component using the below command with the appropriate arguments.

     az containerapp env dapr-component set \
    --name "my-environment" \
    --resource-group "my-resource-group" \
    --dapr-component-name statestore \
    --yaml "./statestore.yaml"

    Using Managed Identity with Dapr Components

Dapr-enabled container apps can now make use of managed identities within Dapr components. This is the ideal path for connecting to Azure services securely, and allows for the removal of sensitive values in the component itself.

The Dapr sidecar makes use of the existing identities available within a given container app; Dapr itself does not have its own identity. Therefore, the steps to enable Dapr + MI are similar to those in the section regarding managed identity for non-Dapr apps. See example steps below specifically for using a system-assigned identity:

    1. Create a system-assigned identity for your container app

    2. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    3. Assign the appropriate roles and permissions (for accessing resources backing your Dapr components) to your ACA's managed identity using the Principal ID

    4. Create a simplified Dapr component without any secrets required

  componentType: state.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: testStorage
  - name: containerName
    value: myContainer
  scopes:
  - myApp
    5. Deploy the component to test the connection from your container app via Dapr!

Keep in mind that, by default, all Dapr components are loaded by every Dapr-enabled container app in an environment. To prevent apps without the appropriate permissions from attempting (and failing) to load a component, use scopes. This ensures that only applications with the appropriate identities to access the backing resource load the component.

    Using Dapr Secret Stores for runtime secrets and component references

    Dapr integrates with secret stores to provide apps and other components with secure storage and access to secrets such as access keys and passwords. The Dapr Secrets API is now available for use in Container Apps.

    Using Dapr’s secret store building block typically involves the following:

    • Setting up a component for a specific secret store solution.
    • Retrieving secrets using the Dapr secrets API in the application code.
    • Optionally, referencing secrets in Dapr component files.

Let's walk through a couple of sample workflows involving the use of Dapr's Secrets Management capabilities!

    Setting up a component for a specific secret store solution

    1. Create an Azure Key Vault instance for hosting the secrets required by your application.

      az keyvault create --name "<your-unique-keyvault-name>" --resource-group "my-resource-group" --location "<your-location>"
2. Create an Azure Key Vault component in your environment without the secret values, as the connection to Azure Key Vault will be established via Managed Identity.

  componentType: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  scopes:
  - myApp

  az containerapp env dapr-component set \
    --name "my-environment" \
    --resource-group "my-resource-group" \
    --dapr-component-name secretstore \
    --yaml "./secretstore.yaml"
    3. Run the following command to create a system-assigned identity for your container app

      az containerapp identity assign \
      --name "myApp" \
      --resource-group "my-resource-group" \
      --system-assigned
    4. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

      az containerapp identity show \
      --name "myApp" \
      --resource-group "my-resource-group"
    5. Assign the appropriate roles and permissions to your container app's managed identity to access Azure Key Vault

      az role assignment create \
      --role "Key Vault Secrets Officer" \
      --assignee $PRINCIPAL_ID \
      --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
    6. Begin using the Dapr Secrets API in your application code to retrieve secrets! See additional details here.
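As a rough sketch of what step 6 can look like from a .NET app, here's a call to the Dapr secrets API over HTTP; the secretstore name matches the component above, while my-api-key is a hypothetical secret name:

using System.Net.Http.Json;

// The Dapr sidecar listens on localhost; 3500 is the default HTTP port.
var dapr = new HttpClient { BaseAddress = new Uri("http://localhost:3500") };

// GET /v1.0/secrets/<store-name>/<secret-name> returns { "<secret-name>": "<value>" }.
var secret = await dapr.GetFromJsonAsync<Dictionary<string, string>>(
    "/v1.0/secrets/secretstore/my-api-key");
Console.WriteLine(secret?["my-api-key"]);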

    Referencing secrets in Dapr component files

    Once a Dapr secret store component is available in the environment, it can be used to retrieve secrets for use in other components. For example, when creating a state store component, you can add a reference to the Dapr secret store from which you would like to source connection information. You will no longer use secrets directly in the component spec, but rather will instruct the Dapr sidecar to retrieve the secrets from the specified store.

componentType: state.azure.blobstorage
version: v1
metadata:
- name: accountName
  value: testStorage
- name: accountKey
  secretRef: account-key
- name: containerName
  value: myContainer
secretStoreComponent: "<SECRET_STORE_COMPONENT_NAME>"
scopes:
- myApp

    Summary

In this post, we have covered the high-level details of how to work with secret values in Azure Container Apps for both Dapr and non-Dapr apps. In the next article, we will walk through a complex end-to-end Dapr example that makes use of the new support for Dapr + Managed Identity. Stay tuned for additional documentation around Dapr secrets, which will be released in the next two weeks!

    Resources

    Here are the main resources to explore for self-study:


    · 8 min read
    Nitya Narasimhan

    Welcome to Day 12 of #30DaysOfServerless!

    So far we've looked at Azure Container Apps - what it is, how it enables microservices communication, and how it enables auto-scaling with KEDA compliant scalers. Today we'll shift gears and talk about Dapr - the Distributed Application Runtime - and how it makes microservices development with ACA easier with core building blocks and a sidecar architecture!

    Ready? Let's go!


    What We'll Cover

    • What is Dapr and why use it?
    • Building Block APIs
    • Dapr Quickstart and Tutorials
    • Dapr-enabled ACA: A Sidecar Approach
    • Exercise: Build & Deploy a Dapr-enabled ACA.
    • Resources: For self-study!


    Hello, Dapr!

Building distributed applications is hard. Building reliable and portable microservices means having middleware that deals with challenges like service discovery, sync and async communications, state management, secure information sharing and more. Integrating these support services into your application can be challenging from both development and maintenance perspectives, adding complexity that is independent of the core application logic you want to focus on.

This is where Dapr (Distributed Application Runtime) shines. It is defined as:

    a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.

    But what does this actually mean to me as an app developer?


    Dapr + Apps: A Sidecar Approach

    The strength of Dapr lies in its ability to:

    • abstract complexities of distributed systems middleware - with Building Block APIs that implement components using best practices to tackle key challenges.
    • implement a Sidecar Pattern with interactions via APIs - allowing applications to keep their codebase clean and focus on app logic.
    • be Incrementally Adoptable - allowing developers to start by integrating one API, then evolving to use more as and when needed.
    • be Platform Agnostic - allowing applications to be developed in a preferred language or framework without impacting integration capabilities.

The application-dapr sidecar interaction is illustrated below. The API abstraction allows applications to get the desired functionality without having to know how it was implemented, or having to integrate Dapr-specific code into their codebase. Note how the sidecar process listens on port 3500 and the API provides clear routes for the specific building blocks supported by Dapr (e.g., /secrets, /state, etc.)
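As a quick illustration of those routes, here's a minimal sketch (assuming a .NET app and a state store component named statestore) that saves and reads state through the sidecar:

using System.Net.Http.Json;

var dapr = new HttpClient { BaseAddress = new Uri("http://localhost:3500") };

// POST /v1.0/state/<store-name> saves one or more key/value pairs.
await dapr.PostAsJsonAsync("/v1.0/state/statestore",
    new[] { new { key = "order-1", value = new { item = "laptop", qty = 1 } } });

// GET /v1.0/state/<store-name>/<key> reads a value back.
Console.WriteLine(await dapr.GetStringAsync("/v1.0/state/statestore/order-1"));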


    Dapr Building Blocks: API Interactions

Dapr Building Blocks refers to the HTTP and gRPC API endpoints exposed by the Dapr sidecar, providing key capabilities like state management, observability, service-to-service invocation, pub/sub messaging and more to the associated application.

    Building Blocks: Under the Hood
The Dapr API is implemented by modular components that codify best practices for tackling the specific challenge that they represent. The API abstraction allows component implementations to evolve, or alternatives to be used, without requiring changes to the application codebase.

    The latest Dapr release has the building blocks shown in the above figure. Not all capabilities are available to Azure Container Apps by default - check the documentation for the latest updates on this. For now, Azure Container Apps + Dapr integration provides the following capabilities to the application:

    In the next section, we'll dive into Dapr-enabled Azure Container Apps. Before we do that, here are a couple of resources to help you explore the Dapr platform by itself, and get more hands-on experience with the concepts and capabilities:

• Dapr Quickstarts - build your first Dapr app, then explore quickstarts for the core APIs, including service-to-service invocation, pub/sub, state management, bindings and secrets management.
    • Dapr Tutorials - go beyond the basic quickstart and explore more realistic service integrations and usage scenarios. Try the distributed calculator example!

    Integrate Dapr & Azure Container Apps

Dapr currently has a v1.9 (preview) version, but Azure Container Apps supports Dapr v1.8. In this section, we'll look at what it takes to enable, configure, and use Dapr integration with Azure Container Apps. It involves three steps: enabling Dapr using settings, configuring Dapr components (API) for use, then invoking the APIs.

Here's a simple publisher-subscriber scenario from the documentation. We have two container apps, identified as publisher-app and subscriber-app, deployed in a single environment. Each has an activated daprd sidecar, allowing them to use the Pub/Sub API to communicate asynchronously with each other without having to write the underlying pub/sub implementation themselves. Rather, the Dapr API uses a pubsub.azure.servicebus component to implement that capability.

    Pub/sub example

    Let's look at how this is setup.

    1. Enable Dapr in ACA: Settings

    We can enable Dapr integration in the Azure Container App during creation by specifying settings in one of two ways, based on your development preference:

    • Using Azure CLI: use custom commandline options for each setting
    • Using Infrastructure-as-Code (IaC): using properties for Bicep, ARM templates

Once enabled, Dapr will run in the same environment as the Azure Container App and listen on port 3500 for API requests. The Dapr sidecar can be shared by multiple Container Apps deployed in the same environment.

    There are four main settings we will focus on for this demo - the example below shows the ARM template properties, but you can find the equivalent CLI parameters here for comparison.

    • dapr.enabled - enable Dapr for Azure Container App
    • dapr.appPort - specify port on which app is listening
    • dapr.appProtocol - specify if using http (default) or gRPC for API
    • dapr.appId - specify unique application ID for service discovery, usage

    These are defined under the properties.configuration section for your resource. Changing Dapr settings does not update the revision but it will restart ACA revisions and replicas. Here is what the relevant section of the ARM template looks like for the publisher-app ACA in the scenario shown above.

    "dapr": {
    "enabled": true,
    "appId": "publisher-app",
    "appProcotol": "http",
    "appPort": 80
    }

    2. Configure Dapr in ACA: Components

The next step after activating the Dapr sidecar is to define the APIs that you want to use and potentially specify the Dapr components (specific implementations of that API) that you prefer. These components are created at the environment level, and by default, Dapr-enabled container apps in an environment will load the complete set of deployed components -- use the scopes property to ensure only components needed by a given app are loaded at runtime. Here's what the ARM template resources section looks like for the example above. This tells us that the environment has a dapr-pubsub component of type pubsub.azure.servicebus deployed, where that component is loaded by container apps with the Dapr app IDs publisher-app and subscriber-app.

    USING MANAGED IDENTITY + DAPR

The secrets approach used here is ideal for demo purposes. However, we recommend using Managed Identity with Dapr in production. For more details on secrets, check out tomorrow's post on Secrets and Managed Identity in Azure Container Apps.

{
  "resources": [
    {
      "type": "daprComponents",
      "name": "dapr-pubsub",
      "properties": {
        "componentType": "pubsub.azure.servicebus",
        "version": "v1",
        "secrets": [
          {
            "name": "sb-root-connectionstring",
            "value": "value"
          }
        ],
        "metadata": [
          {
            "name": "connectionString",
            "secretRef": "sb-root-connectionstring"
          }
        ],
        // Application scopes
        "scopes": ["publisher-app", "subscriber-app"]
      }
    }
  ]
}

    With this configuration, the ACA is now set to use pub/sub capabilities from the Dapr sidecar, using standard HTTP requests to the exposed API endpoint for this service.
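For example, here's a minimal sketch (assuming a .NET publisher and a hypothetical orders topic) of how publisher-app could publish a message through its sidecar using the dapr-pubsub component defined above:

using System.Net.Http.Json;

var dapr = new HttpClient { BaseAddress = new Uri("http://localhost:3500") };

// POST /v1.0/publish/<pubsub-name>/<topic> hands the message to the Service Bus-backed component.
var response = await dapr.PostAsJsonAsync(
    "/v1.0/publish/dapr-pubsub/orders",
    new { orderId = 42, status = "created" });
response.EnsureSuccessStatusCode();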

    Exercise: Deploy Dapr-enabled ACA

In the next couple of posts in this series, we'll discuss how you can use the Dapr secrets API and walk through a more complex example to show how Dapr-enabled Azure Container Apps are created and deployed.

However, you can get hands-on experience with these concepts by walking through one of these two tutorials, each providing an alternative approach to configure and set up the application described in the scenario below:

    Resources

    Here are the main resources to explore for self-study:


    · 10 min read
    Ayca Bas

    Welcome to Day 20 of #30DaysOfServerless!

Every day, millions of people spend their precious time in productivity tools. What if you could use the data and intelligence behind Microsoft applications (Microsoft Teams, Outlook, and many other Office apps) to build seamless automations and custom apps to boost productivity?

    In this post, we'll learn how to build a seamless onboarding experience for new employees joining a company with the power of Microsoft Graph, integrated with Event Hubs and Logic Apps!


    What We'll Cover

    • ✨ The power of Microsoft Graph
    • 🖇️ How do Microsoft Graph and Event Hubs work together?
    • 🛠 Let's Build an Onboarding Workflow!
      • 1️⃣ Setup Azure Event Hubs + Key Vault
      • 2️⃣ Subscribe to users, receive change notifications from Logic Apps
      • 3️⃣ Create Onboarding workflow in the Logic Apps
    • 🚀 Debug: Your onboarding experience
    • ✋ Exercise: Try this tutorial out yourself!
    • 📚 Resources: For Self-Study


    ✨ The Power of Microsoft Graph

Microsoft Graph is the gateway to data and intelligence in the Microsoft 365 platform. It exposes REST APIs and client libraries to access data across Microsoft 365 core services such as Calendar, Teams, To Do, Outlook, People, Planner, OneDrive, OneNote and more.

    Overview of Microsoft Graph

You can build custom experiences with Microsoft Graph, such as automating the onboarding process for new employees. When new employees are created in Azure Active Directory, they will be automatically added to the Onboarding team on Microsoft Teams.

    Solution architecture


    🖇️ Microsoft Graph with Event Hubs

    Microsoft Graph uses a webhook mechanism to track changes in resources and deliver change notifications to the clients. For example, with Microsoft Graph Change Notifications, you can receive change notifications when:

    • a new task is added in the to-do list
    • a user changes the presence status from busy to available
    • an event is deleted/cancelled from the calendar

If you'd like to track a large set of resources at a high frequency, use Azure Event Hubs instead of traditional webhooks to receive change notifications. Azure Event Hubs is a popular real-time event ingestion and distribution service built for scale.

    EVENT GRID - PARTNER EVENTS

Microsoft Graph Change Notifications can also be received by using Azure Event Grid -- currently available for Microsoft Partners! Read the Partner Events Overview documentation for details.

Set up Azure Event Hubs + Key Vault

To get Microsoft Graph Change Notifications delivered to Azure Event Hubs, we'll have to set up Azure Event Hubs and Azure Key Vault. We'll use Azure Key Vault to give Microsoft Graph access to the Event Hubs connection string.

    1️⃣ Create Azure Event Hubs

1. Go to Azure Portal and select Create a resource, type Event Hubs, and select Create.
    2. Fill in the Event Hubs namespace creation details, and then click Create.
    3. Go to the newly created Event Hubs namespace page, select Event Hubs tab from the left pane and + Event Hub:
      • Name your Event Hub as Event Hub
      • Click Create.
    4. Click the name of the Event Hub, and then select Shared access policies and + Add to add a new policy:
      • Give a name to the policy
      • Check Send and Listen
      • Click Create.
    5. After the policy has been created, click the name of the policy to open the details panel, and then copy the Connection string-primary key value. Write it down; you'll need it for the next step.
    6. Go to Consumer groups tab in the left pane and select + Consumer group, give a name for your consumer group as onboarding and select Create.

    2️⃣ Create Azure Key Vault

    1. Go to Azure Portal and select Create a resource, type Key Vault and select Create.
    2. Fill in the Key Vault creation details, and then click Review + Create.
    3. Go to newly created Key Vault and select Secrets tab from the left pane and click + Generate/Import:
      • Give a name to the secret
      • For the value, paste in the connection string you generated at the Event Hubs step
      • Click Create
      • Copy the name of the secret.
    4. Select Access Policies from the left pane and + Add Access Policy:
      • For Secret permissions, select Get
      • For Principal, select Microsoft Graph Change Tracking
      • Click Add.
    5. Select Overview tab from the left pane and copy the Vault URI.

    Subscribe for Logic Apps change notifications

To start receiving Microsoft Graph Change Notifications, we'll need to create a subscription to the resource that we'd like to track - here, 'users'. We'll use Azure Logic Apps to create the subscription.

To create a subscription for Microsoft Graph Change Notifications, we'll need to make an HTTP POST request to https://graph.microsoft.com/v1.0/subscriptions. Microsoft Graph requires Azure Active Directory authentication to make API calls. First, we'll need to register an app in Azure Active Directory, and then we will make the Microsoft Graph Subscription API call with Azure Logic Apps.

    1️⃣ Create an app in Azure Active Directory

    1. In the Azure Portal, go to Azure Active Directory and select App registrations from the left pane and select + New registration. Fill in the details for the new App registration form as below:
      • Name: Graph Subscription Flow Auth
      • Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
      • Select Register.
    2. Go to newly registered app in Azure Active Directory, select API permissions:
      • Select + Add a permission and Microsoft Graph
      • Select Application permissions and add User.Read.All and Directory.Read.All.
      • Select Grant admin consent for the organization
    3. Select Certificates & secrets tab from the left pane, select + New client secret:
      • Choose desired expiry duration
      • Select Add
      • Copy the value of the secret.
    4. Go to Overview from the left pane, copy Application (client) ID and Directory (tenant) ID.

    2️⃣ Create subscription with Azure Logic Apps

1. Go to Azure Portal and select Create a resource, type Logic Apps, and select Create.

    2. Fill in the Logic Apps creation details, and then click Create.

    3. Go to the newly created Logic Apps page, select Workflows tab from the left pane and select + Add:

      • Give a name to the new workflow as graph-subscription-flow
      • Select Stateful as a state type
      • Click Create.
    4. Go to graph-subscription-flow, and then select Designer tab.

    5. In the Choose an operation section, search for Schedule and select Recurrence as a trigger. Fill in the parameters as below:

      • Interval: 61
      • Frequency: Minute
      • Time zone: Select your own time zone
      • Start time: Set a start time
6. Select the + button in the flow and select Add an action. Search for HTTP and select HTTP as an action. Fill in the parameters as below:

      • Method: POST
      • URI: https://graph.microsoft.com/v1.0/subscriptions
      • Headers:
        • Key: Content-type
        • Value: application/json
      • Body:
      {
      "changeType": "created, updated",
      "clientState": "secretClientValue",
      "expirationDateTime": "@{addHours(utcNow(), 1)}",
      "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47",
      "resource": "users"
      }

  In notificationUrl, make sure to replace <YOUR-VAULT-URI> with the vault URI and <YOUR-KEY-VAULT-SECRET-NAME> with the secret name that you copied from the Key Vault.

  In resource, define the resource type you'd like to track changes for. For our example, we will track changes for the users resource.

      • Authentication:
        • Authentication type: Active Directory OAuth
        • Authority: https://login.microsoft.com
        • Tenant: Directory (tenant) ID copied from AAD app
        • Audience: https://graph.microsoft.com
        • Client ID: Application (client) ID copied from AAD app
        • Credential Type: Secret
        • Secret: value of the secret copied from AAD app
    7. Select Save and run your workflow from the Overview tab.

  Check your subscription in Graph Explorer: If you'd like to make sure that your subscription was created successfully by Logic Apps, you can go to Graph Explorer, log in with your Microsoft 365 account, and make a GET request to https://graph.microsoft.com/v1.0/subscriptions. Your subscription should appear in the response once it's created successfully.
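If you prefer code over Graph Explorer, here's a minimal sketch of the same check from a .NET app using the Azure.Identity SDK and the AAD app registration created earlier; the tenant, client, and secret values are placeholders:

using System.Net.Http.Headers;
using Azure.Core;
using Azure.Identity;

var credential = new ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>");
var token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { "https://graph.microsoft.com/.default" }));

var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);

// Lists the active subscriptions, just like GET /v1.0/subscriptions in Graph Explorer.
Console.WriteLine(await http.GetStringAsync("https://graph.microsoft.com/v1.0/subscriptions"));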

    Subscription workflow success

After the subscription is created successfully by Logic Apps, Azure Event Hubs will receive notifications whenever a new user is created in Azure Active Directory.


    Create Onboarding workflow in Logic Apps

We'll create a second workflow in the Logic App to receive change notifications from Event Hubs when a new user is created in Azure Active Directory and add the new user to the Onboarding team on Microsoft Teams.

    1. Go to the Logic Apps you created in the previous steps, select Workflows tab and create a new workflow by selecting + Add:
      • Give a name to the new workflow as teams-onboarding-flow
      • Select Stateful as a state type
      • Click Create.
    2. Go to teams-onboarding-flow, and then select Designer tab.
3. In the Choose an operation section, search for Event Hub, select When events are available in Event Hub as a trigger. Set up the Event Hub connection as below:
      • Create Connection:
        • Connection name: Connection
        • Authentication Type: Connection String
        • Connection String: Go to Event Hubs > Shared Access Policies > RootManageSharedAccessKey and copy Connection string–primary key
        • Select Create.
      • Parameters:
        • Event Hub Name: Event Hub
        • Consumer Group Name: onboarding
    4. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: Events
    5. Inside For each, select + in the flow and add an action, search for Data operations and select Parse JSON. Fill in Parse JSON action as below:
      • Content: Events Content
      • Schema: Copy the json content from schema-parse.json and paste as a schema
    6. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: value
  1. Inside For each, select + in the flow and add an action, search for Microsoft Teams and select Add a member to a team. Log in with your Microsoft 365 account to create a connection and fill in the Add a member to a team action as below:
  • Team: Create an Onboarding team on Microsoft Teams and select it
  • A user AAD ID for the user to add to a team: id
    7. Select Save.

    🚀 Debug your onboarding experience

To debug our onboarding experience, we'll need to create a new user in Azure Active Directory and check whether they are automatically added to the Onboarding team on Microsoft Teams.

    1. Go to Azure Portal and select Azure Active Directory from the left pane and go to Users. Select + New user and Create new user. Fill in the details as below:

      • User name: JaneDoe
      • Name: Jane Doe

      new user in Azure Active Directory

2. Once you add Jane Doe as a new user, it should trigger the teams-onboarding-flow to run. teams onboarding flow success

    3. Once the teams-onboarding-flow runs successfully, you should be able to see Jane Doe as a member of the Onboarding team on Microsoft Teams! 🥳 new member in Onboarding team on Microsoft Teams

    Congratulations! 🎉

    You just built an onboarding experience using Azure Logic Apps, Azure Event Hubs and Azure Key Vault.


    📚 Resources


    · 14 min read
    Justin Yoo

    Welcome to Day 28 of #30DaysOfServerless!

Since it's the serverless end-to-end week, I'm going to discuss how a serverless Azure Functions application with the OpenAPI extension can be seamlessly integrated with a Power Platform custom connector through Azure API Management - in a post I call "Where am I? My GPS Location with Serverless Power Platform Custom Connector".

    OK. Are you ready? Let's get started!


    What We'll Cover

    • What is Power Platform custom connector?
    • Proxy app to Google Maps and Naver Map API
    • API Management integration
    • Two ways of building custom connector
    • Where am I? Power Apps app
    • Exercise: Try this yourself!
    • Resources: For self-study!


    SAMPLE REPO

Want to follow along? Check out the sample app in the GitHub repository used in this post.

    What is Power Platform custom connector?

Power Platform is a low-code/no-code application development tool for fusion teams - groups of people from various disciplines, including field experts (domain experts), IT professionals and professional developers, working together to deliver business value. Within the fusion team, Power Platform turns domain experts into citizen developers or low-code developers. What makes Power Platform even more powerful is that it offers hundreds of connectors to other Microsoft 365 and third-party services like SAP, ServiceNow, Salesforce, Google, etc.

However, what if you want to use your internal APIs or APIs that don't yet offer official connectors? Here's an example: your company has an inventory management system, and you want to use it within Power Apps or Power Automate. That is exactly where Power Platform custom connectors come in.

    Inventory Management System for Power Apps

Therefore, Power Platform custom connectors enrich citizen developers' capabilities, because a custom connector can bring any API application into Power Platform for them to use.

    In this post, let's build a custom connector that provides a static map image generated by Google Maps API and Naver Map API using your GPS location.

    Proxy app to Google Maps and Naver Map API

First, let's build an Azure Functions app that connects to Google Maps and Naver Map. Suppose you've already got the API keys for both services. If you haven't yet, get the keys first by visiting here for Google and here for Naver. Then, store them in local.settings.json within your Azure Functions app.

{
  "Values": {
    ...
    "Maps__Google__ApiKey": "<GOOGLE_MAPS_API_KEY>",
    "Maps__Naver__ClientId": "<NAVER_MAP_API_CLIENT_ID>",
    "Maps__Naver__ClientSecret": "<NAVER_MAP_API_CLIENT_SECRET>"
  }
}

    Here's the sample logic to get the static image from Google Maps API. It takes the latitude and longitude of your current location and image zoom level, then returns the static map image. There are a few hard-coded assumptions, though:

    • The image size should be 400x400.
    • The image should be in .png format.
• The marker should be red and show my location.
public class GoogleMapService : IMapService
{
    public async Task<byte[]> GetMapAsync(HttpRequest req)
    {
        var latitude = req.Query["lat"];
        var longitude = req.Query["long"];
        var zoom = (string)req.Query["zoom"] ?? "14";

        var sb = new StringBuilder();
        sb.Append("https://maps.googleapis.com/maps/api/staticmap")
          .Append($"?center={latitude},{longitude}")
          .Append("&size=400x400")
          .Append($"&zoom={zoom}")
          .Append($"&markers=color:red|{latitude},{longitude}")
          .Append("&format=png32")
          .Append($"&key={this._settings.Google.ApiKey}");
        var requestUri = new Uri(sb.ToString());

        var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

        return bytes;
    }
}

    The NaverMapService class has a similar logic with the same input and assumptions. Here's the code:

public class NaverMapService : IMapService
{
    public async Task<byte[]> GetMapAsync(HttpRequest req)
    {
        var latitude = req.Query["lat"];
        var longitude = req.Query["long"];
        var zoom = (string)req.Query["zoom"] ?? "13";

        var sb = new StringBuilder();
        sb.Append("https://naveropenapi.apigw.ntruss.com/map-static/v2/raster")
          .Append($"?center={longitude},{latitude}")
          .Append("&w=400")
          .Append("&h=400")
          .Append($"&level={zoom}")
          .Append($"&markers=color:blue|pos:{longitude}%20{latitude}")
          .Append("&format=png")
          .Append("&lang=en");
        var requestUri = new Uri(sb.ToString());

        this._http.DefaultRequestHeaders.Clear();
        this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY-ID", this._settings.Naver.ClientId);
        this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY", this._settings.Naver.ClientSecret);

        var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

        return bytes;
    }
}

Let's take a look at the function endpoints for Google Maps and Naver Map. As the GetMapAsync(req) method returns a byte array, you need to return it as a FileContentResult with the content type image/png.

// Google Maps
public class GoogleMapsTrigger
{
    [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
    public async Task<IActionResult> GetGoogleMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
    {
        this._logger.LogInformation("C# HTTP trigger function processed a request.");

        var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

        return new FileContentResult(bytes, "image/png");
    }
}

// Naver Map
public class NaverMapsTrigger
{
    [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
    public async Task<IActionResult> GetNaverMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
    {
        this._logger.LogInformation("C# HTTP trigger function processed a request.");

        var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

        return new FileContentResult(bytes, "image/png");
    }
}

    Then, add the OpenAPI capability to each function endpoint. Here's the example:

// Google Maps
public class GoogleMapsTrigger
{
    [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
    // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
    [OpenApiOperation(operationId: nameof(GoogleMapsTrigger.GetGoogleMapImage), tags: new[] { "google" })]
    [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
    [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
    [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `14`")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
    // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
    public async Task<IActionResult> GetGoogleMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
    {
        ...
    }
}

// Naver Map
public class NaverMapsTrigger
{
    [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
    // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
    [OpenApiOperation(operationId: nameof(NaverMapsTrigger.GetNaverMapImage), tags: new[] { "naver" })]
    [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
    [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
    [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `13`")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
    // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
    public async Task<IActionResult> GetNaverMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
    {
        ...
    }
}

Run the function app locally. Here are the latitude and longitude values for Seoul, Korea (a quick way to call the endpoint is sketched after the list below).

    • latitude: 37.574703
    • longitude: 126.978519
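As a quick sanity check, here's a minimal sketch that calls the local endpoint and saves the image, assuming the default Functions host port 7071:

var http = new HttpClient();
var bytes = await http.GetByteArrayAsync(
    "http://localhost:7071/api/google/image?lat=37.574703&long=126.978519&zoom=14");
await File.WriteAllBytesAsync("seoul.png", bytes);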

    Google Map for Seoul

    It seems to be working! Let's deploy it to Azure.

    API Management integration

Visual Studio 2022 provides a built-in tool for deploying Azure Functions apps to Azure. In addition, the deployment tool supports seamless integration with Azure API Management as long as your Azure Functions app has the OpenAPI capability enabled. In this post, I'm going to use this feature. Right-click on the Azure Functions project and select the "Publish" menu.

    Visual Studio context menu for publish

    Then, you will see the publish screen. Click the "➕ New" button to create a new publish profile.

    Create a new publish profile

    Choose "Azure" and click the "Next" button.

    Choose the target platform for publish

Select the app instance. This time, simply pick the "Azure Function App (Windows)" option, then click "Next".

    Choose the target OS for publish

If you have already provisioned an Azure Functions app instance, you will see it on the screen. Otherwise, create a new one. Then, click "Next".

    Choose the target instance for publish

    In the next step, you are asked to choose the Azure API Management instance for integration. Choose one, or create a new one. Then, click "Next".

    Choose the APIM instance for integration

Finally, select the publish method: either local publish or a GitHub Actions workflow. Let's pick the local publish method for now. Then, click "Finish".

    Choose the deployment type

    The publish profile has been created. Click "Close" to move on.

    Publish profile created

    Now the function app is ready for deployment. Click the "Publish" button and see how it goes.

    Publish function app

    The Azure function app has been deployed and integrated with the Azure API Management instance.

    Function app published

    Go to the published function app site, and everything looks OK.

    Function app on Azure

    And API Management shows the function app integrated perfectly.

    Function app integrated with APIM

    Now, you are ready to create a custom connector. Let's move on.

    Two ways of building custom connector

    There are two ways to create a custom connector.

    Export custom connector from API Management

First, you can use the built-in API Management feature directly: click the ellipsis icon and select the "Create Power Connector" menu.

    Create Power Connector menu

    Then, you are redirected to this screen. While the "API" and "API display name" fields are pre-populated, you need to choose the Power Platform environment tied to your tenant. Choose an environment, click "Authenticate", and click "Create".

    Create custom connector screen

    Check your custom connector on Power Apps or Power Automate side.

    Custom connector created on Power Apps

However, there's a caveat to this approach. Because it's tied to your tenant, you should use the second approach if you want to use this custom connector in another tenant.

    Import custom connector from OpenAPI document or URL

    Click the ellipsis icon again and select the "Export" menu.

    Export menu

    On the Export API screen, choose the "OpenAPI v2 (JSON)" panel because Power Platform custom connector currently accepts version 2 of the OpenAPI document.

    Select OpenAPI v2

    Download the OpenAPI document to your local computer and move to your Power Apps or Power Automate page under your desired environment. I'm going to use the Power Automate page. First, go to the "Data" ➡️ "Custom connectors" page. Then, click the "➕ New custom connector" ➡️ "Import an OpenAPI file" at the top right corner.

    New custom connector

When a modal pops up, give the custom connector a name and import the OpenAPI document exported above. Then, click "Continue".

    Import custom connector

    Actually, that's it! Next, click the "✔️ Create connector" button to create the connector.

    Create custom connector

    Go back to the custom connector page, and you will see the "Maps API" custom connector you just created.

    Custom connector imported

    So, you are ready to create a Power Apps app to display your location on Google Maps or Naver Map! Let's move on.

    Where am I? Power Apps app

Open the Power Apps Studio and create an empty canvas app named "Where am I" with a phone layout.

    Custom connector integration

    To use the custom connector created above, you need to add it to the Power App. Click the cylinder icon on the left and click the "Add data" button.

    Add custom connector to data pane

    Search the custom connector name, "Maps API", and click the custom connector to add.

    Search custom connector

    To use the custom connector, you also need to create a connection to it. Click the "Connect" button and move on.

    Create connection to custom connector

    Now, you've got the connection to the custom connector.

    Connection to custom connector ready

    Controls

    Let's build the Power Apps app. First of all, put three controls Image, Slider and Button onto the canvas.

    Power Apps control added

    Click the "Screen1" control and change the value on the property "OnVisible" to the formula below. The formula stores the current slider value in the zoomlevel collection.

    ClearCollect(
    zoomlevel,
    Slider1.Value
    )

    Click the "Botton1" control and change the value on the property "OnSelected" to the formula below. It passes the current latitude, longitude and zoom level to the custom connector and receives the image data. The received image data is stored in the result collection.

    ClearCollect(
    result,
    MAPS.GetGoogleMapImage(
    Location.Latitude,
    Location.Longitude,
    { zoom: First(zoomlevel).Value }
    )
    )

    Click the "Image1" control and change the value on the property "Image" to the formula below. It gets the image data from the result collection.

    First(result).Url

    Click the "Slider1" control and change the value on the property "OnChange" to the formula below. It stores the current slider value to the zoomlevel collection, followed by calling the custom connector to get the image data against the current location.

    ClearCollect(
    zoomlevel,
    Slider1.Value
    );
    ClearCollect(
    result,
    MAPS.GetGoogleMapImage(
    Location.Latitude,
    Location.Longitude,
    { zoom: First(zoomlevel).Value }
    )
    )

    That seems to be OK. Let's click the "Where am I?" button. But it doesn't show the image. The First(result).Url value is actually similar to this:

    appres://blobmanager/1090a86393a843adbfcf428f0b90e91b/1

It's an internal image reference that you can't access directly.

Workaround: Power Automate workflow

Therefore, you need a workaround using a Power Automate workflow to sort out this issue. Open the Power Automate Studio, create an instant cloud flow with the Power Apps trigger, and name it "Where am I". Then add the input parameters lat, long and zoom.

    Power Apps trigger on Power Automate workflow

    Add custom connector action to get the map image.

    Select action to get the Google Maps image

Pass the appropriate parameters to the action.

    Pass parameters to the custom connector action

    Add a "Response" action and put the following values into each field.

    • "Body" field:

      {
      "base64Image": <power_automate_expression>
      }

      The <power_automate_expression> should be concat('data:', body('GetGoogleMapImage')?['$content-type'], ';base64,', body('GetGoogleMapImage')?['$content']).

    • "Response Body JSON Schema" field:

  {
    "type": "object",
    "properties": {
      "base64Image": {
        "type": "string"
      }
    }
  }

    Format the Response action
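Put together, the Response action's "Body" field might look roughly like this in the action's code view - assuming the custom connector action in the flow is named GetGoogleMapImage, as in the expression above:

{
  "base64Image": "@{concat('data:', body('GetGoogleMapImage')?['$content-type'], ';base64,', body('GetGoogleMapImage')?['$content'])}"
}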

    Let's return to the Power Apps Studio and add the Power Automate workflow you created.

    Add Power Automate workflow

    Select "Button1" and change the value on the property "OnSelect" below. It replaces the direct call to the custom connector with the Power Automate workflow.

    ClearCollect(
    result,
    WhereamI.Run(
    Location.Latitude,
    Location.Longitude,
    First(zoomlevel).Value
    )
    )

Also, change the value on the property "OnChange" of the "Slider1" control to the formula below, replacing the custom connector call with the Power Automate workflow call.

    ClearCollect(
    zoomlevel,
    Slider1.Value
    );
    ClearCollect(
    result,
    WhereamI.Run(
    Location.Latitude,
    Location.Longitude,
    First(zoomlevel).Value
    )
    )

And finally, change the "Image1" control's "Image" property value to the formula below.

    First(result).base64Image

    The workaround has been applied. Click the "Where am I?" button to see your current location from Google Maps.

    Run Power Apps app #1

    If you change the slider left or right, you will see either the zoomed-in image or the zoomed-out image.

    Run Power Apps app #2

    Now, you've created a Power Apps app to show your current location using:

    • Google Maps API through the custom connector, and
    • Custom connector written in Azure Functions with OpenAPI extension!

    Exercise: Try this yourself!

You can fork this GitHub repository to your account and play around with it to see how the custom connector works. After forking the repository, make sure that you create all the necessary secrets in your repository, as documented in the README file.

    Then, click the "Deploy to Azure" button, and it will provision all necessary Azure resources and deploy an Azure Functions app for a custom connector.

    Deploy To Azure

    Once everything is deployed successfully, try to create a Power Apps app and Power Automate workflow to see your current location in real-time!

    Resources: For self-study!

    Want to know more about Power Platform custom connector and Azure Functions OpenAPI extension? Here are several resources you can take a look at:

    - + \ No newline at end of file diff --git a/blog/tags/power-platform/index.html b/blog/tags/power-platform/index.html index c98647099c..90ce23d8bf 100644 --- a/blog/tags/power-platform/index.html +++ b/blog/tags/power-platform/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "power-platform"

    View All Tags

    · 14 min read
    Justin Yoo

    Welcome to Day 28 of #30DaysOfServerless!

Since it's the serverless end-to-end week, I'm going to discuss how a serverless application - Azure Functions with the OpenAPI extension - can be seamlessly integrated with a Power Platform custom connector through Azure API Management, in a post I call "Where am I? My GPS Location with Serverless Power Platform Custom Connector".

    OK. Are you ready? Let's get started!


    What We'll Cover

    • What is Power Platform custom connector?
    • Proxy app to Google Maps and Naver Map API
    • API Management integration
    • Two ways of building custom connector
    • Where am I? Power Apps app
    • Exercise: Try this yourself!
    • Resources: For self-study!


    SAMPLE REPO

    Want to follow along? Check out the sample app on GitHub repository used in this post.

    What is Power Platform custom connector?

Power Platform is a low-code/no-code application development tool for fusion teams - teams whose members come from various disciplines, including field experts (domain experts), IT professionals and professional developers, and who work together to deliver business value. Within the fusion team, Power Platform turns the domain experts into citizen developers or low-code developers. What makes Power Platform even more powerful is that it offers hundreds of connectors to other Microsoft 365 and third-party services like SAP, ServiceNow, Salesforce, Google, etc.

However, what if you want to use your internal APIs, or APIs that don't yet offer official connectors? Here's an example: suppose your company has an inventory management system, and you want to use it within Power Apps or Power Automate. That is exactly where Power Platform custom connectors become necessary.

    Inventory Management System for Power Apps

Therefore, Power Platform custom connectors enrich citizen developers' capabilities, because they can expose practically any API application for those developers to use.

    In this post, let's build a custom connector that provides a static map image generated by Google Maps API and Naver Map API using your GPS location.

    Proxy app to Google Maps and Naver Map API

First, let's build an Azure Functions app that connects to Google Maps and Naver Map. Suppose that you've already got the API keys for both services. If you haven't yet, get the keys first by visiting here for Google and here for Naver. Then, store them in local.settings.json within your Azure Functions app.

{
  "Values": {
    ...
    "Maps__Google__ApiKey": "<GOOGLE_MAPS_API_KEY>",
    "Maps__Naver__ClientId": "<NAVER_MAP_API_CLIENT_ID>",
    "Maps__Naver__ClientSecret": "<NAVER_MAP_API_CLIENT_SECRET>"
  }
}
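The double underscores in those app settings map to nested configuration sections (Maps:Google:ApiKey, and so on), so they can be bound to a simple settings object. Here is a minimal sketch of what such a class could look like - the class and property names are assumptions for illustration, and the sample repository may name things differently:

public class MapsSettings
{
    // Bound from the "Maps" configuration section (Maps__Google__*, Maps__Naver__*).
    public GoogleMapsSettings Google { get; set; } = new GoogleMapsSettings();
    public NaverMapSettings Naver { get; set; } = new NaverMapSettings();
}

public class GoogleMapsSettings
{
    public string ApiKey { get; set; }
}

public class NaverMapSettings
{
    public string ClientId { get; set; }
    public string ClientSecret { get; set; }
}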

    Here's the sample logic to get the static image from Google Maps API. It takes the latitude and longitude of your current location and image zoom level, then returns the static map image. There are a few hard-coded assumptions, though:

    • The image size should be 400x400.
    • The image should be in .png format.
• The marker should be red and show my location.
public class GoogleMapService : IMapService
{
    // NOTE: _http (an HttpClient) and _settings (the Maps configuration) are fields
    // set elsewhere in the class (e.g. via the constructor), omitted here for brevity.
    public async Task<byte[]> GetMapAsync(HttpRequest req)
    {
        var latitude = req.Query["lat"];
        var longitude = req.Query["long"];
        var zoom = (string)req.Query["zoom"] ?? "14";

        var sb = new StringBuilder();
        sb.Append("https://maps.googleapis.com/maps/api/staticmap")
          .Append($"?center={latitude},{longitude}")
          .Append("&size=400x400")
          .Append($"&zoom={zoom}")
          .Append($"&markers=color:red|{latitude},{longitude}")
          .Append("&format=png32")
          .Append($"&key={this._settings.Google.ApiKey}");
        var requestUri = new Uri(sb.ToString());

        var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

        return bytes;
    }
}

The NaverMapService class has similar logic, with the same inputs and assumptions. Here's the code:

public class NaverMapService : IMapService
{
    // NOTE: as above, _http and _settings are fields set elsewhere in the class
    // (e.g. via the constructor), omitted here for brevity.
    public async Task<byte[]> GetMapAsync(HttpRequest req)
    {
        var latitude = req.Query["lat"];
        var longitude = req.Query["long"];
        var zoom = (string)req.Query["zoom"] ?? "13";

        var sb = new StringBuilder();
        sb.Append("https://naveropenapi.apigw.ntruss.com/map-static/v2/raster")
          .Append($"?center={longitude},{latitude}")
          .Append("&w=400")
          .Append("&h=400")
          .Append($"&level={zoom}")
          .Append($"&markers=color:blue|pos:{longitude}%20{latitude}")
          .Append("&format=png")
          .Append("&lang=en");
        var requestUri = new Uri(sb.ToString());

        this._http.DefaultRequestHeaders.Clear();
        this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY-ID", this._settings.Naver.ClientId);
        this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY", this._settings.Naver.ClientSecret);

        var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

        return bytes;
    }
}
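Both services rely on an HttpClient (_http) and the settings object (_settings) being supplied through their constructors. A minimal sketch of wiring that up with the in-process Azure Functions dependency injection model might look like the following - the namespace and the exact registration style are assumptions, not necessarily what the sample repository does:

using System.Net.Http;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MapsApi.Startup))]

namespace MapsApi
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Bind the Maps__* app settings to the settings object sketched earlier.
            var configuration = builder.GetContext().Configuration;
            builder.Services.AddSingleton(configuration.GetSection("Maps").Get<MapsSettings>());

            // Share a single HttpClient instance across both map services.
            builder.Services.AddSingleton<HttpClient>();

            builder.Services.AddSingleton<GoogleMapService>();
            builder.Services.AddSingleton<NaverMapService>();
        }
    }
}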

Let's take a look at the function endpoints for Google Maps and Naver Map. As the GetMapAsync(req) method returns a byte array, you need to return it as a FileContentResult with the content type image/png.

// Google Maps
public class GoogleMapsTrigger
{
    [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
    public async Task<IActionResult> GetGoogleMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
    {
        this._logger.LogInformation("C# HTTP trigger function processed a request.");

        var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

        return new FileContentResult(bytes, "image/png");
    }
}

// Naver Map
public class NaverMapsTrigger
{
    [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
    public async Task<IActionResult> GetNaverMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
    {
        this._logger.LogInformation("C# HTTP trigger function processed a request.");

        var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

        return new FileContentResult(bytes, "image/png");
    }
}

    Then, add the OpenAPI capability to each function endpoint. Here's the example:

// Google Maps
public class GoogleMapsTrigger
{
    [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
    // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
    [OpenApiOperation(operationId: nameof(GoogleMapsTrigger.GetGoogleMapImage), tags: new[] { "google" })]
    [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
    [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
    [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `14`")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
    // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
    public async Task<IActionResult> GetGoogleMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
    {
        ...
    }
}

// Naver Map
public class NaverMapsTrigger
{
    [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
    // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
    [OpenApiOperation(operationId: nameof(NaverMapsTrigger.GetNaverMapImage), tags: new[] { "naver" })]
    [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
    [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
    [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `13`")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
    // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
    public async Task<IActionResult> GetNaverMapImage(
        [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
    {
        ...
    }
}

Run the function app locally. Here are the latitude and longitude values for Seoul, Korea.

    • latitude: 37.574703
    • longitude: 126.978519
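For example, assuming the Functions host is listening on its default local port (7071) and using the default /api route prefix, the two endpoints could be called like this:

http://localhost:7071/api/google/image?lat=37.574703&long=126.978519&zoom=14
http://localhost:7071/api/naver/image?lat=37.574703&long=126.978519&zoom=13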

    Google Map for Seoul

    It seems to be working! Let's deploy it to Azure.

    API Management integration

Visual Studio 2022 provides a built-in tool for deploying an Azure Functions app to Azure. In addition, the deployment tool supports seamless integration with Azure API Management as long as your Azure Functions app has the OpenAPI capability enabled. In this post, I'm going to use this feature. Right-click the Azure Functions project and select the "Publish" menu.

    Visual Studio context menu for publish

    Then, you will see the publish screen. Click the "➕ New" button to create a new publish profile.

    Create a new publish profile

    Choose "Azure" and click the "Next" button.

    Choose the target platform for publish

Select the app instance. This time, simply pick the "Azure Function App (Windows)" option, then click "Next".

    Choose the target OS for publish

If you have already provisioned an Azure Functions app instance, you will see it on the screen. Otherwise, create a new one. Then, click "Next".

    Choose the target instance for publish

    In the next step, you are asked to choose the Azure API Management instance for integration. Choose one, or create a new one. Then, click "Next".

    Choose the APIM instance for integration

Finally, select the publish method: either local publish or a GitHub Actions workflow. Let's pick the local publish method for now. Then, click "Finish".

    Choose the deployment type

    The publish profile has been created. Click "Close" to move on.

    Publish profile created

    Now the function app is ready for deployment. Click the "Publish" button and see how it goes.

    Publish function app

    The Azure function app has been deployed and integrated with the Azure API Management instance.

    Function app published

    Go to the published function app site, and everything looks OK.

    Function app on Azure

    And API Management shows the function app integrated perfectly.

    Function app integrated with APIM

    Now, you are ready to create a custom connector. Let's move on.

    Two ways of building custom connector

    There are two ways to create a custom connector.

    Export custom connector from API Management

First, you can use the built-in API Management feature directly: click the ellipsis icon and select the "Create Power Connector" menu.

    Create Power Connector menu

    Then, you are redirected to this screen. While the "API" and "API display name" fields are pre-populated, you need to choose the Power Platform environment tied to your tenant. Choose an environment, click "Authenticate", and click "Create".

    Create custom connector screen

    Check your custom connector on Power Apps or Power Automate side.

    Custom connector created on Power Apps

However, there's a caveat to this approach. Because it's tied to your tenant, you should use the second approach if you want to use this custom connector in another tenant.

    Import custom connector from OpenAPI document or URL

    Click the ellipsis icon again and select the "Export" menu.

    Export menu

    On the Export API screen, choose the "OpenAPI v2 (JSON)" panel because Power Platform custom connector currently accepts version 2 of the OpenAPI document.

    Select OpenAPI v2
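For reference, a heavily trimmed, illustrative fragment of what the exported OpenAPI v2 (Swagger) document can look like for the Google Maps operation is shown below - the title, version and basePath values are placeholders, and your exported document will differ:

{
  "swagger": "2.0",
  "info": {
    "title": "Maps API",
    "version": "1.0.0"
  },
  "basePath": "/maps",
  "paths": {
    "/google/image": {
      "get": {
        "operationId": "GetGoogleMapImage",
        "tags": [ "google" ],
        "produces": [ "image/png" ],
        "parameters": [
          { "name": "lat", "in": "query", "required": true, "type": "string" },
          { "name": "long", "in": "query", "required": true, "type": "string" },
          { "name": "zoom", "in": "query", "required": false, "type": "string" }
        ],
        "responses": {
          "200": { "description": "The map image as an OK response" }
        }
      }
    }
  }
}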

    Download the OpenAPI document to your local computer and move to your Power Apps or Power Automate page under your desired environment. I'm going to use the Power Automate page. First, go to the "Data" ➡️ "Custom connectors" page. Then, click the "➕ New custom connector" ➡️ "Import an OpenAPI file" at the top right corner.

    New custom connector

When a modal pops up, give the custom connector a name and import the OpenAPI document exported above. Then, click "Continue".

    Import custom connector

    Actually, that's it! Next, click the "✔️ Create connector" button to create the connector.

    Create custom connector

    Go back to the custom connector page, and you will see the "Maps API" custom connector you just created.

    Custom connector imported

    So, you are ready to create a Power Apps app to display your location on Google Maps or Naver Map! Let's move on.

    Where am I? Power Apps app

Open the Power Apps Studio and create an empty canvas app named "Where am I" with a phone layout.

    Custom connector integration

    To use the custom connector created above, you need to add it to the Power App. Click the cylinder icon on the left and click the "Add data" button.

    Add custom connector to data pane

    Search the custom connector name, "Maps API", and click the custom connector to add.

    Search custom connector

    To use the custom connector, you also need to create a connection to it. Click the "Connect" button and move on.

    Create connection to custom connector

    Now, you've got the connection to the custom connector.

    Connection to custom connector ready

    Controls

    Let's build the Power Apps app. First of all, put three controls Image, Slider and Button onto the canvas.

    Power Apps control added

    Click the "Screen1" control and change the value on the property "OnVisible" to the formula below. The formula stores the current slider value in the zoomlevel collection.

    ClearCollect(
    zoomlevel,
    Slider1.Value
    )

    Click the "Botton1" control and change the value on the property "OnSelected" to the formula below. It passes the current latitude, longitude and zoom level to the custom connector and receives the image data. The received image data is stored in the result collection.

    ClearCollect(
    result,
    MAPS.GetGoogleMapImage(
    Location.Latitude,
    Location.Longitude,
    { zoom: First(zoomlevel).Value }
    )
    )

    Click the "Image1" control and change the value on the property "Image" to the formula below. It gets the image data from the result collection.

    First(result).Url

    Click the "Slider1" control and change the value on the property "OnChange" to the formula below. It stores the current slider value to the zoomlevel collection, followed by calling the custom connector to get the image data against the current location.

    ClearCollect(
    zoomlevel,
    Slider1.Value
    );
    ClearCollect(
    result,
    MAPS.GetGoogleMapImage(
    Location.Latitude,
    Location.Longitude,
    { zoom: First(zoomlevel).Value }
    )
    )

    That seems to be OK. Let's click the "Where am I?" button. But it doesn't show the image. The First(result).Url value is actually similar to this:

    appres://blobmanager/1090a86393a843adbfcf428f0b90e91b/1

It's an internal image reference that you can't access directly.

Workaround: Power Automate workflow

Therefore, you need a workaround using a Power Automate workflow to sort out this issue. Open the Power Automate Studio, create an instant cloud flow with the Power Apps trigger, and name it "Where am I". Then add the input parameters lat, long and zoom.

    Power Apps trigger on Power Automate workflow

    Add custom connector action to get the map image.

    Select action to get the Google Maps image

Pass the appropriate parameters to the action.

    Pass parameters to the custom connector action

    Add a "Response" action and put the following values into each field.

    • "Body" field:

      {
      "base64Image": <power_automate_expression>
      }

      The <power_automate_expression> should be concat('data:', body('GetGoogleMapImage')?['$content-type'], ';base64,', body('GetGoogleMapImage')?['$content']).

    • "Response Body JSON Schema" field:

  {
    "type": "object",
    "properties": {
      "base64Image": {
        "type": "string"
      }
    }
  }

    Format the Response action
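Put together, the Response action's "Body" field might look roughly like this in the action's code view - assuming the custom connector action in the flow is named GetGoogleMapImage, as in the expression above:

{
  "base64Image": "@{concat('data:', body('GetGoogleMapImage')?['$content-type'], ';base64,', body('GetGoogleMapImage')?['$content'])}"
}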

    Let's return to the Power Apps Studio and add the Power Automate workflow you created.

    Add Power Automate workflow

    Select "Button1" and change the value on the property "OnSelect" below. It replaces the direct call to the custom connector with the Power Automate workflow.

    ClearCollect(
    result,
    WhereamI.Run(
    Location.Latitude,
    Location.Longitude,
    First(zoomlevel).Value
    )
    )

Also, change the value on the property "OnChange" of the "Slider1" control to the formula below, replacing the custom connector call with the Power Automate workflow call.

    ClearCollect(
    zoomlevel,
    Slider1.Value
    );
    ClearCollect(
    result,
    WhereamI.Run(
    Location.Latitude,
    Location.Longitude,
    First(zoomlevel).Value
    )
    )

And finally, change the "Image1" control's "Image" property value to the formula below.

    First(result).base64Image

    The workaround has been applied. Click the "Where am I?" button to see your current location from Google Maps.

    Run Power Apps app #1

    If you change the slider left or right, you will see either the zoomed-in image or the zoomed-out image.

    Run Power Apps app #2

    Now, you've created a Power Apps app to show your current location using:

    • Google Maps API through the custom connector, and
    • Custom connector written in Azure Functions with OpenAPI extension!

    Exercise: Try this yourself!

You can fork this GitHub repository to your account and play around with it to see how the custom connector works. After forking the repository, make sure that you create all the necessary secrets in your repository, as documented in the README file.

    Then, click the "Deploy to Azure" button, and it will provision all necessary Azure resources and deploy an Azure Functions app for a custom connector.

    Deploy To Azure

    Once everything is deployed successfully, try to create a Power Apps app and Power Automate workflow to see your current location in real-time!

    Resources: For self-study!

    Want to know more about Power Platform custom connector and Azure Functions OpenAPI extension? Here are several resources you can take a look at:

    - + \ No newline at end of file diff --git a/blog/tags/python/index.html b/blog/tags/python/index.html index a870486a20..f276482df4 100644 --- a/blog/tags/python/index.html +++ b/blog/tags/python/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    One post tagged with "python"

    View All Tags

    · 7 min read
    Jay Miller

    Welcome to Day 7 of #30DaysOfServerless!

    Over the past couple of days, we've explored Azure Functions from the perspective of specific programming languages. Today we'll continue that trend by looking at Python - exploring the Timer Trigger and CosmosDB binding, and showcasing integration with a FastAPI-implemented web app.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance: Azure Functions On Python
    • Build & Deploy: Wildfire Detection Apps with Timer Trigger + CosmosDB
    • Demo: My Fire Map App: Using FastAPI and Azure Maps to visualize data
    • Next Steps: Explore Azure Samples
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Developer Guidance

    If you're a Python developer new to serverless on Azure, start with the Azure Functions Python Developer Guide. It covers:

    • Quickstarts with Visual Studio Code and Azure CLI
    • Adopting best practices for hosting, reliability and efficiency.
    • Tutorials showcasing Azure automation, image classification and more
    • Samples showcasing Azure Functions features for Python developers

    Now let's dive in and build our first Python-based Azure Functions app.


    Detecting Wildfires Around the World?

    I live in California which is known for lots of wildfires. I wanted to create a proof of concept for developing an application that could let me know if there was a wildfire detected near my home.

NASA has a few satellites orbiting the Earth that can detect wildfires. These satellites scan for radiative heat and use that to determine the likelihood of a wildfire. NASA updates this information about every 30 minutes, and it can take about four hours to scan and process the information.

    Fire Point Near Austin, TX

    I want to get the information but I don't want to ping NASA or another service every time I check.

What if I occasionally download all the data I need? Then I can query that as much as I like.

I can create a script that does just that. Any time I say "I can create a script", that is a verbal cue for me to consider using an Azure Function. With the function running in the cloud, I can ensure the script runs even when I'm not at my computer.

    How the Timer Trigger Works

This function will utilize the Timer Trigger. This means Azure will call this function to run at a scheduled interval. This isn't the only way to keep the data in sync, but we know that ArcGIS, the service that we're using, says its data is only updated every 30 minutes or so.

    To learn more about the TimerTrigger as a concept, check out the Azure Functions documentation around Timers.

When we create the function, we tell it a few things, like where the script lives (in our case, in __init__.py), the binding type and direction, and notably how often it should run. We specify the timer using "schedule": "<CRON expression>". For us, that's 0 0,30 * * * *, which means every 30 minutes, at the hour and the half-hour.

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "reqTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0,30 * * * *"
    }
  ]
}

    Next, we create the code that runs when the function is called.

    Connecting to the Database and our Source

Disclaimer: The data that we're pulling is for educational purposes only. This is not meant to be a production-level application. You're welcome to play with this project, but ensure that you're using the data in compliance with Esri.

    Our function does two important things.

    1. It pulls data from ArcGIS that meets the parameters
    2. It stores that pulled data into our database

    If you want to check out the code in its entirety, check out the GitHub repository.

    Pulling the data from ArcGIS is easy. We can use the ArcGIS Python API. Then, we need to load the service layer. Finally we query that layer for the specific data.

def write_new_file_data(gis_id: str, layer: int = 0) -> FeatureSet:
    """Returns the feature set of recent, high-confidence fire points."""
    fire_data = g.content.get(gis_id)
    feature = fire_data.layers[layer]  # Loading the feature layer from ArcGIS
    q = feature.query(
        where="confidence >= 65 AND hours_old <= 4",  # The filter for the query
        return_distinct_values=True,
        out_fields="confidence, hours_old",  # The data we want to store with our points
        out_sr=4326,  # The spatial reference of the data
    )
    return q
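The g object used above is an ArcGIS connection. A rough sketch of how it could be created - here as an anonymous connection to ArcGIS Online, though the sample repository may authenticate differently:

from arcgis.gis import GIS

# An anonymous connection to ArcGIS Online is enough for public content like this.
g = GIS()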

    Then we need to store the data in our database.

We're using Cosmos DB for this. Cosmos DB is a NoSQL database, which means that the data looks a lot like a Python dictionary, as it's JSON. This means that we don't need to worry about converting the data into a format that can be stored in a relational database.

The second reason is that Cosmos DB is tied into the Azure ecosystem, so if we want to build more Azure Functions around events from it, we can.

    Our script grabs the information that we pulled from ArcGIS and stores it in our database.

async with CosmosClient.from_connection_string(COSMOS_CONNECTION_STRING) as client:
    # Get the database client first (DATABASE holds the database name, like CONTAINER above).
    database = client.get_database_client(DATABASE)
    container = database.get_container_client(container=CONTAINER)
    for record in data:
        await container.create_item(
            record,
            enable_automatic_id_generation=True,
        )

In our code, each of these functions lives in its own module, so in the main function we focus solely on what the Azure Function will be doing. The script that gets called is __init__.py; there, we'll have the entry point call the other functions.

    We created another function called load_and_write that does all the work outlined above. __init__.py will call that.

async def main(reqTimer: func.TimerRequest) -> None:
    # database and container are the Cosmos DB clients created at module level.
    await update_db.load_and_write(gis_id=GIS_LAYER_ID, database=database, container=container)

    Then we deploy the function to Azure. I like to use VS Code's Azure Extension but you can also deploy it a few other ways.

    Deploying the function via VS Code

Once the function is deployed, we can load the Azure portal and see a ping whenever the function is called. The pings correspond to the function being run.

We can also see the data now living in the datastore.

Document in Cosmos DB

    It's in the Database, Now What?

    Now the real fun begins. We just loaded the last bit of fire data into a database. We can now query that data and serve it to others.

    As I mentioned before, our Cosmos DB data is also stored in Azure, which means that we can deploy Azure Functions to trigger when new data is added. Perhaps you can use this to check for fires near you and use a Logic App to send an alert to your phone or email.
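As a rough sketch of that idea, a Cosmos DB-triggered function could watch the container's change feed and react to each newly stored fire point. The database, container and lease names below are illustrative assumptions (using Cosmos DB extension 3.x binding names), not values from the sample repository. The binding configuration (function.json) might look like this:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "documents",
      "type": "cosmosDBTrigger",
      "direction": "in",
      "connectionStringSetting": "COSMOS_CONNECTION_STRING",
      "databaseName": "fires",
      "collectionName": "firedata",
      "leaseCollectionName": "leases",
      "createLeaseCollectionIfNotExists": true
    }
  ]
}

And the matching handler in __init__.py could simply log (or alert on) each new document:

import logging

import azure.functions as func


def main(documents: func.DocumentList) -> None:
    # Each item in the change feed is a fire point written by the timer function.
    for doc in documents:
        logging.info(
            "New fire point: confidence=%s, hours_old=%s",
            doc.get("confidence"),
            doc.get("hours_old"),
        )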

    Another option is to create a web application that talks to the database and displays the data. I've created an example of this using FastAPI – https://jm-func-us-fire-notify.azurewebsites.net.

    Website that Checks for Fires
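As a simplified, illustrative sketch (not the deployed app's actual code), a FastAPI endpoint reading the stored fire points back out of the same container could look roughly like this - the environment variable and field names are assumptions:

import os

from azure.cosmos import CosmosClient
from fastapi import FastAPI

app = FastAPI()

client = CosmosClient.from_connection_string(os.environ["COSMOS_CONNECTION_STRING"])
database = client.get_database_client(os.environ["COSMOS_DATABASE"])
container = database.get_container_client(os.environ["COSMOS_CONTAINER"])


@app.get("/fires")
def list_fires():
    # Return the stored confidence/recency values for every fire point.
    query = "SELECT c.confidence, c.hours_old FROM c"
    return list(container.query_items(query=query, enable_cross_partition_query=True))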


    Next Steps

    This article showcased the Timer Trigger and the HTTP Trigger for Azure Functions in Python. Now try exploring other triggers and bindings by browsing Bindings code samples for Python and Azure Functions samples for Python

    Once you've tried out the samples, you may want to explore more advanced integrations or extensions for serverless Python scenarios. Here are some suggestions:

    And check out the resources for more tutorials to build up your Azure Functions skills.

    Exercise

I encourage you to fork the repository and try building and deploying it yourself! You can see the TimerTrigger and an HTTPTrigger building the website.

    Then try extending it. Perhaps if wildfires are a big thing in your area, you can use some of the data available in Planetary Computer to check out some other datasets.

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/serverless-e-2-e/index.html b/blog/tags/serverless-e-2-e/index.html index e912749f35..36469396cc 100644 --- a/blog/tags/serverless-e-2-e/index.html +++ b/blog/tags/serverless-e-2-e/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "serverless-e2e"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings, that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
• Integrate Functions with CosmosDB and SignalR to bring real-time push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn which walks you through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days) but this time, with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - that explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!

    - + \ No newline at end of file diff --git a/blog/tags/serverless-hacks/index.html b/blog/tags/serverless-hacks/index.html index a2a5fea926..f92931429b 100644 --- a/blog/tags/serverless-hacks/index.html +++ b/blog/tags/serverless-hacks/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "serverless-hacks"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings, that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
• Integrate Functions with CosmosDB and SignalR to bring real-time push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn which walks you through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days) but this time, with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - that explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/index.html b/blog/tags/serverless-september/index.html index cb3552a9ce..099719bf5f 100644 --- a/blog/tags/serverless-september/index.html +++ b/blog/tags/serverless-september/index.html @@ -14,14 +14,14 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 7 min read
    Devanshi Joshi

    It's Serverless September in a Nutshell! Join us as we unpack our month-long learning journey exploring the core technology pillars for Serverless architectures on Azure. Then end with a look at next steps to build your Cloud-native applications on Azure.


    What We'll Cover

    • Functions-as-a-Service (FaaS)
    • Microservices and Containers
    • Serverless Integrations
    • End-to-End Solutions
    • Developer Tools & #Hacktoberfest

    Banner for Serverless September


    Building Cloud-native Apps

    By definition, cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. You can learn more about cloud-native in Kendall Roden's #ServerlessSeptember post on Going Cloud-native with Azure Container Apps.

Serverless technologies accelerate productivity and minimize costs for deploying applications at cloud scale. So, what can we build with serverless technologies in cloud-native on Azure? Anything that is event-driven - examples include:

    • Microservices - scaled by KEDA-compliant triggers
    • Public API Endpoints - scaled by #concurrent HTTP requests
    • Event-Driven Applications - scaled by length of message queue
    • Web Applications - scaled by #concurrent HTTP requests
    • Background Process - scaled by CPU and Memory usage

    Great - but as developers, we really want to know how we can get started building and deploying serverless solutions on Azure. That was the focus of our #ServerlessSeptember journey. Let's take a quick look at the four key themes.

    Functions-as-a-Service (FaaS)

    Functions-as-a-Service (FaaS) is the epitome of developer productivity for full-stack modern apps. As developers, you don't manage infrastructure and focus only on business logic and application code. And, with Serverless Compute you only pay for when your code runs - making this the simplest first step to begin migrating your application to cloud-native.

In Azure, FaaS is provided by Azure Functions. Check out our Functions + Serverless on Azure to go from learning core concepts, to building your first Functions app in your programming language of choice. Azure Functions supports multiple programming languages, including C#, F#, Java, JavaScript, Python, TypeScript, and PowerShell.

    Want to get extended language support for languages like Go, and Rust? You can Use Custom Handlers to make this happen! But what if you want to have long-running functions, or create complex workflows involving more than one function? Read our post on Durable Entities to learn how you can orchestrate this with Azure Functions.

    Check out this recent AskTheExpert Q&A session with the Azure Functions team to get answers to popular community questions on Azure Functions features and usage.

    Microservices and Containers

    Functions-as-a-Service is an ideal first step towards serverless development. But Functions are just one of the 5 pillars of cloud-native. This week we'll look at two of the other pillars: microservices and containers - with specific focus on two core technologies: Azure Container Apps and Dapr (Distributed Application Runtime).

    In this 6-part series of posts, we walk through each technology independently, before looking at the value of building Azure Container Apps with Dapr.

    • In Hello Container Apps we learned core concepts & deployed our first ACA.
    • In Microservices Communication we learned about ACA environments and virtual networks, and how microservices communicate in ACA with a hands-on tutorial.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and configuring ACA for autoscaling with KEDA-compliant triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr), exploring its Building Block APIs and sidecar architecture for working with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
• Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Build ACA with Dapr

    Check out this recent AskTheExpert Q&A session with the Azure Container Apps team for answers to popular community questions on core features and usage.

    Serverless Integrations

    In the first half of the month we looked at compute resources for building and deploying serverless applications. In the second half, we look at integration tools and resources that automate developer workflows to streamline the end-to-end developer experience.

In Azure, this is enabled by services like Azure Logic Apps and Azure Event Grid. Azure Logic Apps provides a visual designer to create and automate workflows with little or no code involved. Azure Event Grid provides a highly scalable event broker with support for pub/sub communications to drive async event-driven architectures.

    • In Tracking Weather Data Changes With Logic Apps we look at how you can use Logic Apps to integrate the MSN weather service with Azure CosmosDB, allowing automated collection of weather data on changes.

    • In Teach the Cloud to Read & Categorize Mail we take it a step further, using Logic Apps to automate a workflow that includes a Computer Vision service to "read" images and store the results to CosmosDB.

    • In Integrate with Microsoft Graph we explore a multi-cloud scenario (Azure + M365) where change notifications from Microsoft Graph can be integrated using Logic Apps and Event Hubs to power an onboarding workflow.

    • In Cloud Events with Event Grid we learn about the CloudEvents specification (for consistently describing event data) - and learn how Event Grid brokers events in this format. Azure Logic Apps can be an Event handler (subscriber) that uses the event to trigger an automated workflow on receipt.

      Azure Event Grid And Logic Apps

    Want to explore other such integrations? Browse Azure Architectures and filter by selected Azure services for more real-world scenarios.


    End-to-End Solutions

    We've covered serverless compute solutions (for building your serverless applications) and serverless integration services to automate end-to-end workflows in synchronous or asynchronous event-driven architectures. In this final week, we want to leave you with a sense of end-to-end development tools and use cases that can be enabled by Serverless on Azure. Here are some key examples:

• In this tutorial, you'll learn to deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps - with a Blazor front-end and two Web API projects.
• Deploy Java containers to cloud: In this tutorial you learn to build and deploy a Java application running on Spring Boot, by publishing it in a container to Azure Container Registry, then deploying it to Azure Container Apps from ACR, via the Azure Portal.
• Where am I? My GPS Location with Serverless Power Platform Custom Connector: In this step-by-step tutorial you learn to integrate a serverless application (built on Azure Functions and OpenAPI) with Power Platform custom connectors via Azure API Management (API-M). This pattern can empower a new ecosystem of fusion apps for cases like inventory management.
    And in our Serverless Hacks initiative, we walked through an 8-step hack to build a serverless tollbooth. Check out this 12-part video walkthrough of a reference solution using .NET.

    Developer Tools

    But wait - there's more. Those are a sample of the end-to-end application scenarios built on serverless on Azure. But what about the developer experience? In this article, we say hello to the Azure Developer CLI - an open-source tool that streamlines your develop-deploy workflow, with simple commands that map to core stages of your development journey. Go from code to cloud with one CLI.
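    As a hedged sketch of what that workflow looks like in practice: assuming you have azd installed and pick one of its starter templates (the template name below is just an example, not a recommendation from this series), two commands take you from an empty folder to a provisioned, deployed app.

    ```bash
    # Scaffold a project from an azd starter template (template name is an example)
    azd init --template todo-nodejs-mongo

    # Provision the Azure resources and deploy the app in a single step
    azd up
    ```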

    And watch this space for more such tutorials and content through October, including a special #Hacktoberfest focused initiative to encourage and support first-time contributors to open-source. Here's a sneak peek at the project we plan to share - the new awesome-azd templates gallery.


    Join us at Microsoft Ignite!

    Want to continue your learning journey, and learn about what's next for Serverless on Azure? Microsoft Ignite happens Oct 12-14 this year and has multiple sessions on relevant technologies and tools. Check out the Session Catalog and register here to attend online.

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/10/index.html b/blog/tags/serverless-september/page/10/index.html index 71c46aa7bd..d295b0fcb4 100644 --- a/blog/tags/serverless-september/page/10/index.html +++ b/blog/tags/serverless-september/page/10/index.html @@ -14,14 +14,14 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 5 min read
    Mike Morton

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Log Streaming - in Azure Portal
    • Console Connect - in Azure Portal
    • Metrics - using Azure Monitor
    • Log Analytics - using Azure Monitor
    • Metric Alerts and Log Alerts - using Azure Monitor


    In past weeks, @kendallroden wrote about what it means to be Cloud-Native and @Anthony Chu covered the various ways to get your apps running on Azure Container Apps. Today, we will talk about the observability tools you can use to observe, debug, and diagnose your Azure Container Apps.

    Azure Container Apps provides several observability features to help you debug and diagnose your apps. There are both Azure portal and CLI options you can use to help understand the health of your apps and help identify when issues arise.

    While these features are helpful throughout your container app’s lifetime, there are two that are especially helpful. Log streaming and console connect can be a huge help in the initial stages when issues often rear their ugly head. Let's dig into both of these a little.

    Log Streaming

    Log streaming allows you to use the Azure portal to view the streaming logs from your app. You’ll see the logs written from the app to the container’s console (stderr and stdout). If your app is running multiple revisions, you can choose from which revision to view logs. You can also select a specific replica if your app is configured to scale. Lastly, you can choose from which container to view the log output. This is useful when you are running a custom or Dapr sidecar container. view streaming logs

    Here’s an example CLI command to view the logs of a container app.

    az containerapp logs show -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.
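    For reference, here is a slightly fuller sketch of the same command with the filtering options described above; the revision, replica, and container names are placeholders, and exact flag availability can vary with your containerapp CLI extension version.

    ```bash
    # Stream logs continuously, narrowed to a specific revision, replica, and container
    az containerapp logs show \
      --name MyContainerapp \
      --resource-group MyResourceGroup \
      --revision <revision-name> \
      --replica <replica-name> \
      --container <container-name> \
      --follow \
      --tail 50
    ```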

    Console Connect

    In the Azure portal, you can connect to the console of a container in your app. Like log streaming, you can select the revision, replica, and container if applicable. After connecting to the console of the container, you can execute shell commands and utilities that you have installed in your container. You can view files and their contents, monitor processes, and perform other debugging tasks.

    This can be great for checking configuration files or even modifying a setting or library your container is using. Of course, updating a container in this fashion is not something you should do to a production app, but tweaking and re-testing an app in a non-production environment can speed up development.

    Here’s an example CLI command to connect to the console of a container app.

    az containerapp exec -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.

    Metrics

    Azure Monitor collects metric data from your container app at regular intervals to help you gain insights into the performance and health of your container app. Container apps provide these metrics:

    • CPU usage
    • Memory working set bytes
    • Network in bytes
    • Network out bytes
    • Requests
    • Replica count
    • Replica restart count

    Here you can see the metrics explorer showing the replica count for an app as it scaled from one replica to fifteen, and then back down to one.

    You can also retrieve metric data through the Azure CLI.
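    As a minimal sketch, assuming the Requests metric listed above, you could pull metric data with az monitor metrics list against the container app's resource ID:

    ```bash
    # Resolve the container app's resource ID
    APP_ID=$(az containerapp show \
      --name MyContainerapp \
      --resource-group MyResourceGroup \
      --query id --output tsv)

    # List the Requests metric, totaled in 5-minute buckets
    az monitor metrics list \
      --resource $APP_ID \
      --metric Requests \
      --interval PT5M \
      --aggregation Total
    ```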

    Log Analytics

    Azure Monitor Log Analytics is great for viewing your historical logs emitted from your container apps. There are two custom tables of interest: ContainerAppConsoleLogs_CL, which contains all the log messages written by your app (stdout and stderr), and ContainerAppSystemLogs_CL, which contains the system messages from the Azure Container Apps service.

    You can also query Log Analytics through the Azure CLI.
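    A minimal sketch of such a query, assuming a workspace GUID you supply; column names like ContainerAppName_s and Log_s are assumptions that may differ slightly in your workspace, and the command may require the log-analytics CLI extension.

    ```bash
    # Query the console logs custom table for one container app over the last hour
    az monitor log-analytics query \
      --workspace <log-analytics-workspace-guid> \
      --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'MyContainerapp' | project TimeGenerated, Log_s | take 50" \
      --timespan PT1H
    ```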

    Alerts

    Azure Monitor alerts notify you so that you can respond quickly to critical issues. There are two types of alerts that you can define: metric alerts and log alerts.

    You can create alert rules from metric charts in the metric explorer and from queries in Log Analytics. You can also define and manage alerts from the Monitor|Alerts page.

    Here is what creating an alert looks like in the Azure portal. In this case we are setting an alert rule from the metric explorer to trigger an alert if the replica restart count for a specific container app is greater than two within the last fifteen minutes.
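    The same alert rule can be sketched in the CLI. The metric name RestartCount below is an assumption - confirm the exact name with az monitor metrics list-definitions against your container app before relying on it.

    ```bash
    # Resolve the container app's resource ID
    APP_ID=$(az containerapp show \
      --name MyContainerapp \
      --resource-group MyResourceGroup \
      --query id --output tsv)

    # Alert when replica restarts exceed 2 within a 15-minute window
    az monitor metrics alert create \
      --name replica-restart-alert \
      --resource-group MyResourceGroup \
      --scopes $APP_ID \
      --condition "total RestartCount > 2" \
      --window-size 15m \
      --evaluation-frequency 5m
    ```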

    To learn more about alerts, refer to Overview of alerts in Microsoft Azure.

    Conclusion

    In this article, we looked at the several ways to observe, debug, and diagnose your Azure Container Apps. As you can see, there are rich portal tools and a complete set of CLI commands to use. All the tools are helpful throughout the lifecycle of your app; be sure to take advantage of them when you have an issue, and to help prevent issues.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/11/index.html b/blog/tags/serverless-september/page/11/index.html index 4942428047..6c3a5c4846 100644 --- a/blog/tags/serverless-september/page/11/index.html +++ b/blog/tags/serverless-september/page/11/index.html @@ -14,14 +14,14 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 10 min read
    Brian Benz

    Welcome to Day 18 of #30DaysOfServerless!

    Yesterday my Serverless September post introduced you to making Azure Logic Apps and Azure Cosmos DB work together with a sample application that collects weather data. Today I'm sharing a more robust solution that actually reads my mail. Let's learn about Teaching the cloud to read your mail!

    Ready? Let's go!


    What We'll Cover

    • Introduction to the ReadMail solution
    • Setting up Azure storage, Cosmos DB and Computer Vision
    • Connecting it all together with a Logic App
    • Resources: For self-study!


    Introducing the ReadMail solution

    The US Postal system offers a subscription service that sends you images of mail it will be delivering to your home. I decided it would be cool to try getting Azure to collect data based on these images, so that I could categorize my mail and track the types of mail that I received.

    To do this, I used Azure storage, Cosmos DB, Logic Apps, and computer vision. When a new email comes in from the US Postal service (USPS), it triggers a logic app that:

    • Posts attachments to Azure storage
    • Triggers Azure Computer vision to perform an OCR function on attachments
    • Extracts any results into a JSON document
    • Writes the JSON document to Cosmos DB

    workflow for the readmail solution

    In this post I'll walk you through setting up the solution for yourself.

    Prerequisites

    Setup Azure Services

    First, we'll create all of the target resources our Logic App needs, then we'll create the Logic App.

    1. Azure Storage

    We'll be using Azure storage to collect attached images from emails as they arrive. Adding images to Azure storage will also trigger a workflow that performs OCR on new attached images and stores the OCR data in Cosmos DB.

    To create a new Azure storage account from the portal dashboard, Select Create a resource > Storage account > Create.

    The Basics tab covers all of the features and information that we will need for this solution:

    Section | Field | Required or optional | Description
    Project details | Subscription | Required | Select the subscription for the new storage account.
    Project details | Resource group | Required | Create a new resource group that you will use for storage, Cosmos DB, Computer Vision and the Logic App.
    Instance details | Storage account name | Required | Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
    Instance details | Region | Required | Select the appropriate region for your storage account.
    Instance details | Performance | Required | Select Standard performance for general-purpose v2 storage accounts (default).
    Instance details | Redundancy | Required | Select locally-redundant storage (LRS) for this example.

    Select Review + create to accept the remaining default options, then validate and create the account.
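    If you'd rather script the storage account than click through the portal, here's a minimal CLI sketch of the same choices (the resource group, account name, and location are placeholders you should change):

    ```bash
    # Resource group shared by storage, Cosmos DB, Computer Vision, and the Logic App
    az group create --name mailreader-rg --location eastus

    # Standard, locally-redundant, general-purpose v2 storage account
    az storage account create \
      --name mailreaderstorage123 \
      --resource-group mailreader-rg \
      --location eastus \
      --sku Standard_LRS \
      --kind StorageV2
    ```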

    2. Azure CosmosDB

    Cosmos DB will be used to store the JSON documents returned by the Computer Vision OCR process.

    See more details and screen shots for setting up CosmosDB in yesterday's Serverless September post - Using Logic Apps with Cosmos DB

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then for simplicity use the same resource group you created when you set up storage. Enter an account name and choose a location, select provisioned throughput capacity mode, and apply the free tier discount. From here you can select Review and Create, then Create.

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    Setting | Action
    Container ID | id
    Container partition | /id

    Press OK to create a database and container
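    For completeness, here's a hedged CLI sketch of the same account, database, and container setup (account, database, and container names are placeholders; the partition key path mirrors the /id value above):

    ```bash
    # Cosmos DB account (Core SQL API is the default) with the free tier applied
    az cosmosdb create \
      --name mailreader-cosmos \
      --resource-group mailreader-rg \
      --enable-free-tier true

    # Database and container with /id as the partition key
    az cosmosdb sql database create \
      --account-name mailreader-cosmos \
      --resource-group mailreader-rg \
      --name MailReaderDb

    az cosmosdb sql container create \
      --account-name mailreader-cosmos \
      --resource-group mailreader-rg \
      --database-name MailReaderDb \
      --name mailitems \
      --partition-key-path "/id"
    ```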

    3. Azure Computer Vision

    Azure Cognitive Services' Computer Vision will perform an OCR process on each image attachment that is stored in Azure storage.

    From the portal dashboard, Select Create a resource > AI + Machine Learning > Computer Vision > Create.

    The Basics and Identity tabs cover all of the features and information that we will need for this solution:

    Basics Tab

    Section | Field | Required or optional | Description
    Project details | Subscription | Required | Select the subscription for the new service.
    Project details | Resource group | Required | Use the same resource group that you used for Azure storage and Cosmos DB.
    Instance details | Region | Required | Select the appropriate region for your Computer Vision service.
    Instance details | Name | Required | Choose a unique name for your Computer Vision service.
    Instance details | Pricing | Required | Select the free tier for this example.

    Identity Tab

    Section | Field | Required or optional | Description
    System assigned managed identity | Status | Required | Enable system assigned identity to grant the resource access to other existing resources.

    Select Review + create to accept the remaining default options, then validate and create the account.
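    A rough CLI equivalent, assuming the free F0 tier and a system-assigned identity as chosen above (the resource name and location are placeholders, and you may be prompted to accept the Cognitive Services terms on first creation):

    ```bash
    az cognitiveservices account create \
      --name mailreader-vision \
      --resource-group mailreader-rg \
      --kind ComputerVision \
      --sku F0 \
      --location eastus \
      --assign-identity
    ```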


    Connect it all with a Logic App

    Now we're ready to put this all together in a Logic App workflow!

    1. Create Logic App

    From the portal dashboard, Select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults.

    2. Create Workflow: Add Trigger

    Once the Logic App is created, select Create a workflow from designer.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for outlook.com on the right under Add a trigger. Choose outlook.com. Choose When a new email arrives as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Set the following values:

    Parameter | Value
    Folder | Inbox
    Importance | Any
    Only With Attachments | Yes
    Include Attachments | Yes

    Then add a new parameter:

    Parameter | Value
    From | Add the email address that sends you the email with attachments
    3. Create Workflow: Add Action (for Trigger)

    Choose add an action and choose control > for-each.

    logic app for each

    Inside the for-each action, in Select an output from previous steps, choose attachments. Then, again inside the for-each action, add the create blob action:

    Set the following values:

    Parameter | Value
    Folder Path | /mailreaderinbox
    Blob Name | Attachments Name
    Blob Content | Attachments Content

    This extracts attachments from the email and creates a new blob for each attachment.

    Next, inside the same for-each action, add the get blob content action.

    Set the following values:

    Parameter | Value
    Blob | id
    Infer content type | Yes

    We create and read from a blob for each attachment because Computer Vision needs a non-virtual source to read from when performing an OCR process. Because we enabled system assigned identity to grant Computer Vision access to other existing resources, it can access the blob but not the outlook.com attachment. Also, we pass the ID of the blob to use as a unique ID when writing to Cosmos DB.

    create blob from attachments

    Next, inside the same for-each action, choose add an action and choose control > condition. Set the value to Media Type > is equal to > image/JPEG

    The USPS sends attachments of multiple types, but we only want to scan attachments that have images of our mail, which are always JPEG images. If the condition is true, we will process the image with Computer Vision OCR and write the results to a JSON document in CosmosDB.

    In the True section of the condition, add an action and choose Computer Vision API > Optical Character Recognition (OCR) to JSON.

    Set the following values:

    Parameter | Value
    Image Source | Image Content
    Image content | File Content

    In the same True section of the condition, choose add an action and choose Cosmos DB. Choose Create or Update Document from the actions. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    Be sure to use the ID passed from blob storage as your unique ID for Cosmos DB. That way you can troubleshoot any JSON or OCR issues by tracing the JSON document in Cosmos DB back to the blob in Azure storage. Also, include the Computer Vision JSON response, as it contains the results of the Computer Vision OCR scan. All other elements are optional.

    4. TEST WORKFLOW

    When complete, you should have an action in the Logic App designer that looks something like this:

    Logic App workflow create or update document in cosmosdb

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB each time that an email arrives with image attachments.

    Check the data in Cosmos Db by opening the Data explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    Congratulations

    You just built your personal ReadMail solution with Logic Apps! 🎉


    Resources: For self-study!

    Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/12/index.html b/blog/tags/serverless-september/page/12/index.html index 9130857bca..fac7fe784c 100644 --- a/blog/tags/serverless-september/page/12/index.html +++ b/blog/tags/serverless-september/page/12/index.html @@ -14,14 +14,14 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 6 min read
    Brian Benz

    Welcome to Day 17 of #30DaysOfServerless!

    In past weeks, we've covered serverless technologies that provide core capabilities (functions, containers, microservices) for building serverless solutions. This week we're looking at technologies that make service integrations more seamless, starting with Logic Apps. Let's look at one usage example today!

    Ready? Let's Go!


    What We'll Cover

    • Introduction to Logic Apps
    • Setting up Cosmos DB for Logic Apps
    • Setting up a Logic App connection and event
    • Writing data to Cosmos DB from a Logic app
    • Resources: For self-study!


    Introduction to Logic Apps

    Previously in Serverless September, we've covered Azure Functions, where an event triggers your code. In Logic Apps, the event triggers a workflow that you design. Logic Apps enable serverless applications to connect to external sources for data and then automate business processes via workflows.

    In this post I'll walk you through setting up a Logic App that works with Cosmos DB. For this example, we'll connect to the MSN weather service and design a logic app workflow that collects data when the weather changes and writes the data to Cosmos DB.

    PREREQUISITES

    Setup Cosmos DB for Logic Apps

    Cosmos DB has many APIs to choose from, but to use the default Logic App connection, we need to choose the Cosmos DB SQL API. We'll set this up via the Azure Portal.

    To get started with Cosmos DB, you create an account, then a database, then a container to store JSON documents. To create a new Cosmos DB account from the portal dashboard, Select Create a resource > Azure Cosmos DB > Create. Choose core SQL for the API.

    Select your subscription, then create a new resource group called CosmosWeather. Enter an account name and choose a location, select provisioned throughput capacity mode, and apply the free tier discount. From here you can select Review and Create, then Create.

    Azure Cosmos DB is available in two different capacity modes: provisioned throughput and serverless. You can perform the same database operations in both modes, but the way you get billed for these operations is different. We will be using provisioned throughput and the free tier for this example.

    Setup the CosmosDB account

    Next, create a new database and container. Go to the Data Explorer in your new Cosmos DB account, and choose New Container. Name the database, and keep all the other defaults except:

    Setting | Action
    Container ID | id
    Container partition | /id

    Press OK to create a database and container

    A database is analogous to a traditional DBMS namespace. It's used to organize one or more containers.

    Setup the CosmosDB Container

    Now we're ready to set up our logic app and write to Cosmos DB!

    Setup Logic App connection + event

    Once the Cosmos DB SQL API account is created, we can set up our Logic App. From the portal dashboard, Select Create a resource > Integration > Logic App > Create. Name your Logic App and select a location; the rest of the settings can be left at their defaults. Once your new Logic App is created, select Create a workflow from designer to get started.

    A workflow is a series of steps that defines a task or process. Each workflow starts with a single trigger, after which you must add one or more actions.

    When in designer, search for weather on the right under Add a trigger. Choose MSN Weather. Choose When the current conditions change as the trigger.

    A trigger is always the first step in any workflow and specifies the condition for running any further steps in that workflow.

    Add a location. Valid locations are City, Region, State, Country, Landmark, Postal Code, latitude and longitude. This triggers a new workflow when the conditions change for a location.

    Write data from Logic App to Cosmos DB

    Now we are ready to set up the action to write data to Cosmos DB. Choose add an action and choose Cosmos DB.

    An action is each step in a workflow after the trigger. Every action runs some operation in a workflow.

    In this case, we will be writing a JSON document to the Cosmos DB container we created earlier. Choose Create or Update Document from the actions. At this point you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger

    Start with the connection setup for the Cosmos DB action. Select Access Key, and provide the primary read-write key (found under keys in Cosmos DB), and the Cosmos DB account ID (without 'documents.azure.com').

    Next, fill in your Cosmos DB Database ID and Collection ID. Create a JSON document by selecting dynamic content elements and wrapping JSON formatting around them.

    You will need a unique ID for each document that you write to Cosmos DB, and for that you can use an expression. Because we declared id to be our unique ID in Cosmos DB, we will use that for the name. Under expressions, type guid() and press enter to add a unique ID to the JSON document. When complete, you should have a workflow in designer that looks something like this:

    Logic App workflow with trigger and action

    Save the workflow and test the connections by clicking Run Trigger > Run. If connections are working, you should see documents flowing into Cosmos DB over the next few minutes.

    Check the data in Cosmos Db by opening the Data explorer, then choosing the container you created and selecting items. You should see documents similar to this:

    Logic App workflow with trigger and action

    Resources: For self-study!

    Once you've grasped the basics in this post, there is so much more to learn!

    Thanks for stopping by!

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/13/index.html b/blog/tags/serverless-september/page/13/index.html index 997c19e1cb..1fa874f8b5 100644 --- a/blog/tags/serverless-september/page/13/index.html +++ b/blog/tags/serverless-september/page/13/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 4 min read
    Nitya Narasimhan
    Devanshi Joshi

    Welcome to Day 15 of #30DaysOfServerless!

    This post marks the midpoint of our Serverless on Azure journey! Our Week 2 Roadmap showcased two key technologies - Azure Container Apps (ACA) and Dapr - for building serverless microservices. We'll also look at what happened elsewhere in #ServerlessSeptember, then set the stage for our next week's focus: Serverless Integrations.

    Ready? Let's Go!


    What We'll Cover

    • ICYMI: This Week on #ServerlessSeptember
    • Recap: Microservices, Azure Container Apps & Dapr
    • Coming Next: Serverless Integrations
    • Exercise: Take the Cloud Skills Challenge
    • Resources: For self-study!

    This Week In Events

    We had a number of activities happen this week - here's a quick summary:

    This Week in #30Days

    In our #30Days series we focused on Azure Container Apps and Dapr.

    • In Hello Container Apps we learned how Azure Container Apps helps you run microservices and containerized apps on serverless platforms. And we built and deployed our first ACA.
    • In Microservices Communication we explored concepts like environments and virtual networking, with a hands-on example to show how two microservices communicate in a deployed ACA.
    • In Scaling Your Container Apps we learned about KEDA (Kubernetes Event-Driven Autoscaler) and how to configure autoscaling for your ACA based on KEDA-supported triggers.
    • In Build with Dapr we introduced the Distributed Application Runtime (Dapr) and learned how its Building Block APIs and sidecar architecture make it easier to develop microservices with ACA.
    • In Secure ACA Access we learned how to secure ACA access to external services with - and without - Dapr, covering Secret Stores and Managed Identity.
    • Finally, Build ACA with Dapr tied it all together with an enterprise app scenario where an orders processor (ACA) uses Dapr APIs (PubSub, State Management) to receive and store order messages from Azure Service Bus.

    Here's a visual recap:

    Self Study: Code Samples & Tutorials

    There's no better way to get familiar with the concepts than to dive in and play with code samples and hands-on tutorials. Here are 4 resources to bookmark and try out:

    1. Dapr Quickstarts - these walk you through samples showcasing individual Building Block APIs - with multiple language options available.
    2. Dapr Tutorials provides more complex examples of microservices applications and tools usage, including a Distributed Calculator polyglot app.
    3. Next, try to Deploy a Dapr application to Azure Container Apps to get familiar with the process of setting up the environment, then deploying the app.
    4. Or, explore the many Azure Container Apps samples showcasing various features and more complex architectures tied to real world scenarios.

    What's Next: Serverless Integrations!

    So far we've talked about core technologies (Azure Functions, Azure Container Apps, Dapr) that provide foundational support for your serverless solution. Next, we'll look at Serverless Integrations - specifically at technologies like Azure Logic Apps and Azure Event Grid that automate workflows and create seamless end-to-end solutions that integrate other Azure services in serverless-friendly ways.

    Take the Challenge!

    The Cloud Skills Challenge is still going on, and we've already had hundreds of participants join and complete the learning modules to skill up on Serverless.

    There's still time to join and get yourself on the leaderboard. Get familiar with Azure Functions, SignalR, Logic Apps, Azure SQL and more - in serverless contexts!!


    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/14/index.html b/blog/tags/serverless-september/page/14/index.html index 6ce05a093d..69c034d8f2 100644 --- a/blog/tags/serverless-september/page/14/index.html +++ b/blog/tags/serverless-september/page/14/index.html @@ -14,7 +14,7 @@ - + @@ -24,7 +24,7 @@ Image showing container apps role assignment

  • Lastly, we need to restart the container app revision, to do so run the command below:

     ##Get revision name and assign it to a variable
    ##(--output tsv returns the raw revision name without surrounding JSON quotes)
    $REVISION_NAME = (az containerapp revision list `
    --name $BACKEND_SVC_NAME `
    --resource-group $RESOURCE_GROUP `
    --query [0].name `
    --output tsv)

    ##Restart revision by name
    az containerapp revision restart `
    --resource-group $RESOURCE_GROUP `
    --name $BACKEND_SVC_NAME `
    --revision $REVISION_NAME
  • Run end-to-end Test on Azure

    From the Azure Portal, select the Azure Container App orders-processor and navigate to Log stream under the Monitoring tab; leave the stream connected and open. From the Azure Portal, select the Azure Service Bus Namespace ordersservices, select the topic orderreceivedtopic, select the subscription named orders-processor-subscription, then click on Service Bus Explorer (preview). From there we need to publish/send a message. Use the JSON payload below:

    ```json
    {
      "data": {
        "reference": "Order 150",
        "quantity": 150,
        "createdOn": "2022-05-10T12:45:22.0983978Z"
      }
    }
    ```

    If all is configured correctly, you should start seeing the information logs in the Container Apps Log stream, similar to the images below. Image showing publishing messages from Azure Service

    Information logs on the Log stream of the deployed Azure Container App Image showing ACA Log Stream

    🎉 CONGRATULATIONS

    You have successfully deployed an Azure Container App to the cloud and configured the Dapr Pub/Sub API with Azure Service Bus.

    9. Clean up

    If you are done with the tutorial, use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name $RESOURCE_GROUP

    Exercise

    I've left the configuration of the Dapr State Store API with Azure Cosmos DB for you :)

    When you look at the action method OrderReceived in the controller ExternalOrdersController, you will see that I left a line with a ToDo: note; this line is responsible for saving the received message (OrderModel) into Azure Cosmos DB.

    There is no need to change anything in the code base (other than removing this commented line); that's the beauty of Dapr Building Blocks and how easily they allow us to plug components into our microservice application without any plumbing or bringing in external SDKs.

    You do need to work on the configuration part of the Dapr State Store by creating a new component file, as we did with the Pub/Sub API. The things you need to work on are:

    • Provision an Azure Cosmos DB Account and obtain its masterKey (see the CLI sketch after this list).
    • Create a Dapr Component file adhering to Dapr Specs.
    • Create an Azure Container Apps component file adhering to ACA component specs.
    • Test locally on your dev machine using Dapr Component file.
    • Register the new Dapr State Store component with the Azure Container Apps Environment and set the Cosmos DB masterKey from the Azure Portal. If you want to challenge yourself more, use the Managed Identity approach as done in this post! It's the right way to protect your keys, and you won't need to worry about managing Cosmos DB keys anymore!
    • Build a new image of the application and push it to Azure Container Registry.
    • Update Azure Container Apps and create a new revision which contains the updated code.
    • Verify the results by checking Azure Cosmos DB, you should see the Order Model stored in Cosmos DB.
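    Here's a small CLI sketch for the first bullet - provisioning the Cosmos DB account and capturing its masterKey for the component file (the account name is a placeholder for the exercise):

    ```bash
    # Provision a Cosmos DB account for the state store exercise
    az cosmosdb create \
      --name orders-state-cosmos \
      --resource-group $RESOURCE_GROUP

    # Capture the primary (master) key to use in the Dapr state store component
    COSMOS_PRIMARY_KEY=$(az cosmosdb keys list \
      --name orders-state-cosmos \
      --resource-group $RESOURCE_GROUP \
      --query primaryMasterKey --output tsv)
    ```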

    If you need help, you can always refer to my blog post Azure Container Apps State Store With Dapr State Management API, which contains exactly what you need to implement here, so I'm very confident you will be able to complete this exercise with no issues. Happy coding :)

    What's Next?

    If you enjoyed working with Dapr and Azure Container Apps, and you want a deep dive into more complex scenarios (Dapr bindings, service discovery, auto scaling with KEDA, sync services communication, distributed tracing, health probes, etc.) where multiple services are deployed to a single Container Apps Environment, I have created a detailed tutorial that walks you through building the application step by step.

    The published posts so far are listed below, and I'm publishing more posts on a weekly basis, so stay tuned :)

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/15/index.html b/blog/tags/serverless-september/page/15/index.html index 5a4adfab68..f4c75808d2 100644 --- a/blog/tags/serverless-september/page/15/index.html +++ b/blog/tags/serverless-september/page/15/index.html @@ -14,14 +14,14 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 11 min read
    Kendall Roden

    Welcome to Day 13 of #30DaysOfServerless!

    In the previous post, we learned about all things Distributed Application Runtime (Dapr) and highlighted the capabilities you can unlock through managed Dapr in Azure Container Apps! Today, we'll dive into how we can make use of Container Apps secrets and managed identities to securely access cloud-hosted resources that your Container Apps depend on!

    Ready? Let's go.


    What We'll Cover

    • Secure access to external services overview
    • Using Container Apps Secrets
    • Using Managed Identity for connecting to Azure resources
    • Using Dapr secret store component references (Dapr-only)
    • Conclusion
    • Resources: For self-study!


    Securing access to external services

    In most, if not all, microservice-based applications, one or more services in the system will rely on other cloud-hosted resources; Think external services like databases, secret stores, message brokers, event sources, etc. To interact with these services, an application must have the ability to establish a secure connection. Traditionally, an application will authenticate to these backing resources using some type of connection string or password.

    I'm not sure if it was just me, but one of the first things I learned as a developer was to ensure credentials and other sensitive information were never checked into the codebase. The ability to inject these values at runtime is a non-negotiable.

    In Azure Container Apps, applications can securely leverage connection information via Container Apps Secrets. If the resource is Azure-based, a more ideal solution that removes the dependence on secrets altogether is using Managed Identity.

    Specifically for Dapr-enabled container apps, users can now tap into the power of the Dapr secrets API! With this new capability unlocked in Container Apps, users can call the Dapr secrets API from application code to securely access secrets from Key Vault or other backing secret stores. In addition, customers can also make use of a secret store component reference when wiring up Dapr state store components and more!

    ALSO, I'm excited to share that support for Dapr + Managed Identity is now available! What does this mean? It means that you can enable Managed Identity for your container app - and when establishing connections via Dapr, the Dapr sidecar can use this identity! This means simplified components without the need for secrets when connecting to Azure services!

    Let's dive a bit deeper into the following three topics:

    1. Using Container Apps secrets in your container apps
    2. Using Managed Identity to connect to Azure services
    3. Connecting to services securely for Dapr-enabled apps

    Secure access to external services without Dapr

    Leveraging Container Apps secrets at runtime

    Users can leverage this approach for any values which need to be securely stored, however, it is recommended to use Managed Identity where possible when connecting to Azure-specific resources.

    First, let's establish a few important points regarding secrets in container apps:

    • Secrets are scoped at the container app level, meaning secrets cannot be shared across container apps today
    • When running in multiple-revision mode,
      • changes to secrets do not generate a new revision
      • running revisions will not be automatically restarted to reflect changes. If you want to force-update existing container app revisions to reflect the changed secrets values, you will need to perform revision restarts.
    STEP 1

    Provide the secure value as a secret parameter when creating your container app using the syntax "SECRET_NAME=SECRET_VALUE"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name queuereader \
    --environment "my-environment-name" \
    --image demos/queuereader:v1 \
    --secrets "queue-connection-string=$CONNECTION_STRING"
    STEP 2

    Create an environment variable which references the value of the secret created in step 1 using the syntax "ENV_VARIABLE_NAME=secretref:SECRET_NAME"

    az containerapp create \
    --resource-group "my-resource-group" \
    --name myQueueApp \
    --environment "my-environment-name" \
    --image demos/myQueueApp:v1 \
    --secrets "queue-connection-string=$CONNECTIONSTRING" \
    --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"

    This ConnectionString environment variable can be used within your application code to securely access the connection string value at runtime.
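    Since secret changes alone don't restart running revisions (see the note above), a typical rotation looks something like the sketch below - update the secret, then restart the active revision. This assumes a recent containerapp CLI extension; $NEW_CONNECTION_STRING is a placeholder for the rotated value.

    ```bash
    # Update the stored secret value on the existing container app
    az containerapp secret set \
      --name myQueueApp \
      --resource-group "my-resource-group" \
      --secrets "queue-connection-string=$NEW_CONNECTION_STRING"

    # Restart the active revision so it picks up the new secret value
    REVISION_NAME=$(az containerapp revision list \
      --name myQueueApp \
      --resource-group "my-resource-group" \
      --query "[0].name" --output tsv)

    az containerapp revision restart \
      --name myQueueApp \
      --resource-group "my-resource-group" \
      --revision $REVISION_NAME
    ```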

    Using Managed Identity to connect to Azure services

    A managed identity from Azure Active Directory (Azure AD) allows your container app to access other Azure AD-protected resources. This approach is recommended where possible as it eliminates the need for managing secret credentials in your container apps and allows you to properly scope the permissions needed for a given container app using role-based access control. Both system-assigned and user-assigned identities are available in container apps. For more background on managed identities in Azure AD, see Managed identities for Azure resources.

    To configure your app with a system-assigned managed identity you will follow similar steps to the following:

    STEP 1

    Run the following command to create a system-assigned identity for your container app

    az containerapp identity assign \
    --name "myQueueApp" \
    --resource-group "my-resource-group" \
    --system-assigned
    STEP 2

    Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    az containerapp identity show \
    --name "myQueueApp" \
    --resource-group "my-resource-group"
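    To actually capture that value in the PRINCIPAL_ID variable used in the next step, one way (a bash sketch) is to query just the principalId field:

    ```bash
    PRINCIPAL_ID=$(az containerapp identity show \
      --name "myQueueApp" \
      --resource-group "my-resource-group" \
      --query principalId --output tsv)
    ```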
    STEP 3

    Assign the appropriate roles and permissions to your container app's managed identity using the Principal ID in step 2 based on the resources you need to access (example below)

    az role assignment create \
    --role "Storage Queue Data Contributor" \
    --assignee $PRINCIPAL_ID \
    --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/queueServices/default/queues/<queue>"

    After running the above commands, your container app will be able to access your Azure Storage Queue because its managed identity has been assigned the "Storage Queue Data Contributor" role. The role assignments you create will be contingent solely on the resources your container app needs to access. To instrument your code to use this managed identity, see more details here.

    In addition to using managed identity to access services from your container app, you can also use managed identity to pull your container images from Azure Container Registry.
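    As a sketch, assuming your containerapp CLI extension supports the --identity flag and the identity has been granted the AcrPull role on the registry (the registry name below is a placeholder), that configuration looks like:

    ```bash
    az containerapp registry set \
      --name "myQueueApp" \
      --resource-group "my-resource-group" \
      --server myregistry.azurecr.io \
      --identity system
    ```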

    Secure access to external services with Dapr

    For Dapr-enabled apps, there are a few ways to connect to the resources your solutions depend on. In this section, we will discuss when to use each approach.

    1. Using Container Apps secrets in your Dapr components
    2. Using Managed Identity with Dapr Components
    3. Using Dapr Secret Stores for runtime secrets and component references

    Using Container Apps secrets in Dapr components

    Prior to the support for the Dapr Secrets Management building block, this was the only approach available for securely storing sensitive values for use in Dapr components.

    In Dapr OSS, when no secret store reference is provided in a Dapr component file, the default secret store is set to "Kubernetes secrets". In Container Apps, we do not expose the ability to use this default store. Rather, Container Apps secrets can be used in its place.

    With the introduction of the Secrets API and the ability to use Dapr + Managed Identity, this approach is useful for a limited number of scenarios:

    • Quick demos and dev/test scenarios using the Container Apps CLI
    • Securing values when a secret store is not configured or available for use
    • Using service principal credentials to configure an Azure Key Vault secret store component (using Managed Identity is recommended)
    • Securing access credentials which may be required when creating a non-Azure secret store component
    STEP 1

    Create a Dapr component which can be used by one or more services in the container apps environment. In the below example, you will create a secret to store the storage account key and reference this secret from the appropriate Dapr metadata property.

    componentType: state.azure.blobstorage
    version: v1
    metadata:
    - name: accountName
      value: testStorage
    - name: accountKey
      secretRef: account-key
    - name: containerName
      value: myContainer
    secrets:
    - name: account-key
      value: "<STORAGE_ACCOUNT_KEY>"
    scopes:
    - myApp
    STEP 2

    Deploy the Dapr component using the below command with the appropriate arguments.

     az containerapp env dapr-component set \
    --name "my-environment" \
    --resource-group "my-resource-group" \
    --dapr-component-name statestore \
    --yaml "./statestore.yaml"

    Using Managed Identity with Dapr Components

    Dapr-enabled container apps can now make use of managed identities within Dapr components. This is the most ideal path for connecting to Azure services securely, and allows for the removal of sensitive values in the component itself.

    The Dapr sidecar makes use of the existing identities available within a given container app; Dapr itself does not have its own identity. Therefore, the steps to enable Dapr + MI are similar to those in the section regarding managed identity for non-Dapr apps. See example steps below specifically for using a system-assigned identity:

    1. Create a system-assigned identity for your container app

    2. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

    3. Assign the appropriate roles and permissions (for accessing resources backing your Dapr components) to your ACA's managed identity using the Principal ID

    4. Create a simplified Dapr component without any secrets required

    componentType: state.azure.blobstorage
    version: v1
    metadata:
    - name: accountName
      value: testStorage
    - name: containerName
      value: myContainer
    scopes:
    - myApp
    5. Deploy the component to test the connection from your container app via Dapr!

    Keep in mind, all Dapr components will be loaded by each Dapr-enabled container app in an environment by default. To prevent apps without the appropriate permissions from trying (and failing) to load a component, use scopes. This will ensure that only applications with the appropriate identities to access the backing resource load the component.

    Using Dapr Secret Stores for runtime secrets and component references

    Dapr integrates with secret stores to provide apps and other components with secure storage and access to secrets such as access keys and passwords. The Dapr Secrets API is now available for use in Container Apps.

    Using Dapr’s secret store building block typically involves the following:

    • Setting up a component for a specific secret store solution.
    • Retrieving secrets using the Dapr secrets API in the application code.
    • Optionally, referencing secrets in Dapr component files.

    Let's walk through a couple sample workflows involving the use of Dapr's Secrets Management capabilities!

    Setting up a component for a specific secret store solution

    1. Create an Azure Key Vault instance for hosting the secrets required by your application.

      az keyvault create --name "<your-unique-keyvault-name>" --resource-group "my-resource-group" --location "<your-location>"
    2. Create an Azure Key Vault component in your environment without the secrets values, as the connection will be established to Azure Key Vault via Managed Identity.

      componentType: secretstores.azure.keyvault
      version: v1
      metadata:
      - name: vaultName
        value: "[your_keyvault_name]"
      scopes:
      - myApp

      az containerapp env dapr-component set \
        --name "my-environment" \
        --resource-group "my-resource-group" \
        --dapr-component-name secretstore \
        --yaml "./secretstore.yaml"
    3. Run the following command to create a system-assigned identity for your container app

      az containerapp identity assign \
      --name "myApp" \
      --resource-group "my-resource-group" \
      --system-assigned
    4. Retrieve the identity details for your container app and store the Principal ID for the identity in a variable "PRINCIPAL_ID"

      az containerapp identity show \
      --name "myApp" \
      --resource-group "my-resource-group"
    5. Assign the appropriate roles and permissions to your container app's managed identity to access Azure Key Vault

      az role assignment create \
      --role "Key Vault Secrets Officer" \
      --assignee $PRINCIPAL_ID \
      --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}
    6. Begin using the Dapr Secrets API in your application code to retrieve secrets! See additional details here.

    Referencing secrets in Dapr component files

    Once a Dapr secret store component is available in the environment, it can be used to retrieve secrets for use in other components. For example, when creating a state store component, you can add a reference to the Dapr secret store from which you would like to source connection information. You will no longer use secrets directly in the component spec, but rather will instruct the Dapr sidecar to retrieve the secrets from the specified store.

    componentType: state.azure.blobstorage
    version: v1
    metadata:
    - name: accountName
      value: testStorage
    - name: accountKey
      secretRef: account-key
    - name: containerName
      value: myContainer
    secretStoreComponent: "<SECRET_STORE_COMPONENT_NAME>"
    scopes:
    - myApp

    Summary

    In this post, we have covered the high-level details of how to work with secret values in Azure Container Apps for both Dapr and non-Dapr apps. In the next article, we will walk through a complex Dapr example from end-to-end which makes use of the new support for Dapr + Managed Identity. Stay tuned for additional documentation around Dapr secrets, as it will be released in the next two weeks!

    Resources

    Here are the main resources to explore for self-study:

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/16/index.html b/blog/tags/serverless-september/page/16/index.html index 09547e4e1c..6a597f2d25 100644 --- a/blog/tags/serverless-september/page/16/index.html +++ b/blog/tags/serverless-september/page/16/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 8 min read
    Nitya Narasimhan

    Welcome to Day 12 of #30DaysOfServerless!

    So far we've looked at Azure Container Apps - what it is, how it enables microservices communication, and how it enables auto-scaling with KEDA compliant scalers. Today we'll shift gears and talk about Dapr - the Distributed Application Runtime - and how it makes microservices development with ACA easier with core building blocks and a sidecar architecture!

    Ready? Let's go!


    What We'll Cover

    • What is Dapr and why use it?
    • Building Block APIs
    • Dapr Quickstart and Tutorials
    • Dapr-enabled ACA: A Sidecar Approach
    • Exercise: Build & Deploy a Dapr-enabled ACA.
    • Resources: For self-study!


    Hello, Dapr!

    Building distributed applications is hard. Building reliable and portable microservices means having middleware that deals with challenges like service discovery, sync and async communications, state management, secure information sharing and more. Integrating these support services into your application can be challenging from both development and maintenance perspectives, adding complexity that is independent of the core application logic you want to focus on.

    This is where Dapr (Distributed Application Runtime) shines - it's defined as:

    a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.

    But what does this actually mean to me as an app developer?


    Dapr + Apps: A Sidecar Approach

    The strength of Dapr lies in its ability to:

    • abstract complexities of distributed systems middleware - with Building Block APIs that implement components using best practices to tackle key challenges.
    • implement a Sidecar Pattern with interactions via APIs - allowing applications to keep their codebase clean and focus on app logic.
    • be Incrementally Adoptable - allowing developers to start by integrating one API, then evolving to use more as and when needed.
    • be Platform Agnostic - allowing applications to be developed in a preferred language or framework without impacting integration capabilities.

    The application-Dapr sidecar interaction is illustrated below. The API abstraction allows applications to get the desired functionality without having to know how it was implemented, or without having to integrate Dapr-specific code into their codebase. Note how the sidecar process listens on port 3500 and the API provides clear routes for the specific building blocks supported by Dapr (e.g., /secrets, /state, etc.)


    Dapr Building Blocks: API Interactions

    Dapr Building Blocks refers to the HTTP and gRPC API endpoints exposed by the Dapr sidecar, providing key capabilities like state management, observability, service-to-service invocation, pub/sub messaging and more to the associated application.

    Building Blocks: Under the Hood
    The Dapr API is implemented by modular components that codify best practices for tackling the specific challenge that they represent. The API abstraction allows component implementations to evolve, or alternatives to be used, without requiring changes to the application codebase.

    The latest Dapr release has the building blocks shown in the above figure. Not all capabilities are available to Azure Container Apps by default - check the documentation for the latest updates on this. For now, Azure Container Apps + Dapr integration provides the following capabilities to the application:

    In the next section, we'll dive into Dapr-enabled Azure Container Apps. Before we do that, here are a couple of resources to help you explore the Dapr platform by itself, and get more hands-on experience with the concepts and capabilities:

    • Dapr Quickstarts - build your first Dapr app, then explore quickstarts for core APIs including service-to-service invocation, pub/sub, state management, bindings and secrets management.
    • Dapr Tutorials - go beyond the basic quickstart and explore more realistic service integrations and usage scenarios. Try the distributed calculator example!

    Integrate Dapr & Azure Container Apps

    Dapr currently has a v1.9 (preview) version, but Azure Container Apps supports Dapr v1.8. In this section, we'll look at what it takes to enable, configure, and use Dapr integration with Azure Container Apps. It involves 3 steps: enabling Dapr using settings, configuring Dapr components (API) for use, then invoking the APIs.

    Here's a simple publisher-subscriber scenario from the documentation. We have two Container apps identified as publisher-app and subscriber-app deployed in a single environment. Each ACA has an activated daprd sidecar, allowing them to use the Pub/Sub API to communicate asynchronously with each other - without having to write the underlying pub/sub implementation themselves. Rather, we can see that the Dapr API uses a pubsub.azure.servicebus component to implement that capability.

    Pub/sub example

    Let's look at how this is setup.

    1. Enable Dapr in ACA: Settings

    We can enable Dapr integration in the Azure Container App during creation by specifying settings in one of two ways, based on your development preference:

    • Using Azure CLI: use custom commandline options for each setting
    • Using Infrastructure-as-Code (IaC): using properties for Bicep, ARM templates

    Once enabled, Dapr will run in the same environment as the Azure Container App, and listen on port 3500 for API requests. The Dapr sidecar can be shared by multiple Container Apps deployed in the same environment.

    There are four main settings we will focus on for this demo - the example below shows the ARM template properties, but you can find the equivalent CLI parameters here for comparison.

    • dapr.enabled - enable Dapr for Azure Container App
    • dapr.appPort - specify port on which app is listening
    • dapr.appProtocol - specify if using http (default) or gRPC for API
    • dapr.appId - specify unique application ID for service discovery, usage

    These are defined under the properties.configuration section for your resource. Changing Dapr settings does not update the revision but it will restart ACA revisions and replicas. Here is what the relevant section of the ARM template looks like for the publisher-app ACA in the scenario shown above.

    "dapr": {
    "enabled": true,
    "appId": "publisher-app",
    "appProcotol": "http",
    "appPort": 80
    }
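    For comparison, here is a hedged sketch of the same four settings expressed as CLI options at creation time (the image, environment, and registry values are placeholders, not from the sample):

    ```bash
    az containerapp create \
      --name publisher-app \
      --resource-group my-resource-group \
      --environment my-environment \
      --image <your-registry>/publisher-app:latest \
      --enable-dapr \
      --dapr-app-id publisher-app \
      --dapr-app-port 80 \
      --dapr-app-protocol http
    ```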

    2. Configure Dapr in ACA: Components

    The next step after activating the Dapr sidecar is to define the APIs that you want to use and potentially specify the Dapr components (specific implementations of that API) that you prefer. These components are created at the environment level and, by default, Dapr-enabled container apps in an environment will load the complete set of deployed components -- use the scopes property to ensure only components needed by a given app are loaded at runtime. Here's what the ARM template resources section looks like for the example above. This tells us that the environment has a dapr-pubsub component of type pubsub.azure.servicebus deployed - where that component is loaded by container apps with dapr ids (publisher-app, subscriber-app).

    USING MANAGED IDENTITY + DAPR

    The secrets approach used here is ideal for demo purposes. However, we recommend using Managed Identity with Dapr in production. For more details on secrets, check out tomorrow's post on Secrets and Managed Identity in Azure Container Apps.

    {
      "resources": [
        {
          "type": "daprComponents",
          "name": "dapr-pubsub",
          "properties": {
            "componentType": "pubsub.azure.servicebus",
            "version": "v1",
            "secrets": [
              {
                "name": "sb-root-connectionstring",
                "value": "value"
              }
            ],
            "metadata": [
              {
                "name": "connectionString",
                "secretRef": "sb-root-connectionstring"
              }
            ],
            // Application scopes
            "scopes": ["publisher-app", "subscriber-app"]
          }
        }
      ]
    }

    With this configuration, the ACA is now set to use pub/sub capabilities from the Dapr sidecar, using standard HTTP requests to the exposed API endpoint for this service.
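    For example, from inside the publisher-app container the publish call is just an HTTP request to the sidecar; the topic name and payload below are illustrative, not taken from the sample.

    ```bash
    # Publish a message to the "orders" topic through the dapr-pubsub component
    curl -X POST http://localhost:3500/v1.0/publish/dapr-pubsub/orders \
      -H "Content-Type: application/json" \
      -d '{"orderId": "12345", "status": "received"}'
    ```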

    Exercise: Deploy Dapr-enabled ACA

    In the next couple posts in this series, we'll be discussing how you can use the Dapr secrets API and doing a walkthrough of a more complex example, to show how Dapr-enabled Azure Container Apps are created and deployed.

    However, you can get hands-on experience with these concepts by walking through one of these two tutorials, each providing an alternative approach to configure and set up the application described in the scenario below:

    Resources

    Here are the main resources to explore for self-study:

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/17/index.html b/blog/tags/serverless-september/page/17/index.html index 71fb74c208..6b9c84b6c1 100644 --- a/blog/tags/serverless-september/page/17/index.html +++ b/blog/tags/serverless-september/page/17/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 6 min read
    Melony Qin

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Custom Handlers, and why use them?
    • How Custom Handler Works
    • Message Processing With Azure Custom Handler
    • Azure Portal Monitoring


    If you have been working with Azure Functions for a while, you may know that Azure Functions is a serverless FaaS (Function-as-a-Service) offering provided by Microsoft Azure, built for key scenarios including building web APIs, processing file uploads, responding to database changes, processing IoT data streams, managing message queues, and more.

    Custom Handlers: What and Why

    Azure Functions supports multiple programming languages, including C#, F#, Java, JavaScript, TypeScript, Python, and PowerShell. If you want extended language support for other languages such as Go and Rust, that's where custom handlers come in.

    An Azure Functions custom handler lets you author functions in any language that supports HTTP primitives. With custom handlers, you can use triggers and input and output bindings via extension bundles, so you get all the triggers and bindings you're used to with Azure Functions.
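
    Those bindings are made available through the extension bundle declared in host.json, which typically looks something like the snippet below (the version range shown is illustrative; use the range recommended by the current docs).

    "extensionBundle": {
      "id": "Microsoft.Azure.Functions.ExtensionBundle",
      "version": "[3.*, 4.0.0)"
    }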

    How a Custom Handler Works

    Let's take a look at custom handlers and how they work.

    • A request is sent to the Functions host when an event is triggered. The Functions host then issues a request payload to the custom handler, which holds the trigger and input binding data as well as other metadata for the function.
    • The custom handler is a local HTTP web server. It executes the function code and returns a response payload to the Functions host.
    • The Functions host passes data from the response to the function's output bindings, which pass it on to downstream services for processing (an example response payload is shown below).
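
    For reference, the response payload a custom handler sends back to the Functions host is a JSON document along these lines; the binding name shown is hypothetical, and a function with no output bindings can simply return an empty Outputs object.

    {
      "Outputs": {
        "myQueueOutput": "some output value"
      },
      "Logs": [
        "processed one queue item"
      ],
      "ReturnValue": "ok"
    }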

    Check out this article to learn more about Azure Functions custom handlers.


    Message processing with Custom Handlers

    Message processing is one of the key scenarios that Azure functions are trying to address. In the message-processing scenario, events are often collected in queues. These events can trigger Azure functions to execute a piece of business logic.

    You can use the Service Bus trigger to respond to messages from an Azure Service Bus queue - it's then up to the Azure functions custom handlers to take further actions to process the messages. The process is described in the following diagram:

    Building Serverless Go Applications with Azure functions custom handlers

    In Azure Functions, the function.json file defines the function's trigger, input and output bindings, and other configuration settings. Note that every function can have multiple bindings, but only one trigger. The following example sets up a Service Bus queue trigger in the function.json file:

    {
      "bindings": [
        {
          "name": "queueItem",
          "type": "serviceBusTrigger",
          "direction": "in",
          "queueName": "functionqueue",
          "connection": "ServiceBusConnection"
        }
      ]
    }

    You can add a binding definition in function.json to write the output to a database or another destination of your choice. Supported bindings can be found here.

    Since we're programming in Go, we need to set the value of defaultExecutablePath in the customHandler.description section of the host.json file to point at our handler executable.

    Assuming we're on Windows and have named our Go application server.go, running the go build server.go command produces an executable called server.exe. So we set server.exe in host.json, as in the following example:

      "customHandler": {
    "description": {
    "defaultExecutablePath": "./server.exe",
    "workingDirectory": "",
    "arguments": []
    }
    }

    Here we showcase a simple Go application with an Azure Functions custom handler that prints out the messages received from the Functions host. The following is the full code of the server.go application:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
        "os"
    )

    // InvokeRequest mirrors the payload the Functions host sends to the custom handler.
    type InvokeRequest struct {
        Data     map[string]json.RawMessage
        Metadata map[string]interface{}
    }

    // queueHandler runs whenever the Functions host forwards a Service Bus trigger event.
    func queueHandler(w http.ResponseWriter, r *http.Request) {
        var invokeRequest InvokeRequest

        d := json.NewDecoder(r.Body)
        d.Decode(&invokeRequest)

        // "queueItem" matches the binding name defined in function.json.
        var parsedMessage string
        json.Unmarshal(invokeRequest.Data["queueItem"], &parsedMessage)

        fmt.Println(parsedMessage)
    }

    func main() {
        // The Functions host tells the custom handler which port to listen on.
        customHandlerPort, exists := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
        if !exists {
            customHandlerPort = "8080"
        }
        mux := http.NewServeMux()
        // The route matches the function name in the Functions app.
        mux.HandleFunc("/MessageProcessorFunction", queueHandler)
        fmt.Println("Go server Listening on: ", customHandlerPort)
        log.Fatal(http.ListenAndServe(":"+customHandlerPort, mux))
    }

    Ensure you have Azure Functions Core Tools installed, then use the func start command to start the function. Next, we use a C#-based message sender application on GitHub to send 3,000 messages to the Azure Service Bus queue. You'll see Azure Functions instantly start to process the messages and print them out, as shown below:

    Monitoring Serverless Go Applications with Azure functions custom handlers
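
    If you'd rather generate the test messages from Python instead of the C# sender, a minimal sketch like the one below works too. It assumes the azure-servicebus package is installed and that you substitute your own connection string; the queue name matches function.json.

    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    CONNECTION_STRING = "<your-service-bus-connection-string>"  # placeholder
    QUEUE_NAME = "functionqueue"  # matches the queue name in function.json

    with ServiceBusClient.from_connection_string(CONNECTION_STRING) as client:
        with client.get_queue_sender(QUEUE_NAME) as sender:
            # send a batch of test messages to trigger the function
            sender.send_messages([ServiceBusMessage(f"message {i}") for i in range(100)])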


    Azure portal monitoring

    Let's go back to the Azure portal to see how the messages in the Azure Service Bus queue were processed. There were 3,000 messages queued in the Service Bus queue (the blue line represents incoming messages). The outgoing messages (the red line with the smaller wave shape) show that messages are progressively being read by Azure Functions, as shown below:

    Monitoring Serverless Go Applications with Azure functions custom handlers

    Check out this article about monitoring Azure Service bus for further information.

    Next steps

    Thanks for following along; we're looking forward to hearing your feedback. If you discover potential issues, please record them on the Azure Functions host GitHub repository or tag us @AzureFunctions on Twitter.

    RESOURCES

    To start building your serverless applications with custom handlers, check out the official documentation:

    Life is a journey of learning. Let’s stay tuned!

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/18/index.html b/blog/tags/serverless-september/page/18/index.html index 385a1d4f8e..6be40d668e 100644 --- a/blog/tags/serverless-september/page/18/index.html +++ b/blog/tags/serverless-september/page/18/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 5 min read
    Anthony Chu

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Using Visual Studio
    • Using Visual Studio Code: Docker, ACA extensions
    • Using Azure CLI
    • Using CI/CD Pipelines


    Last week, @kendallroden wrote about what it means to be Cloud-Native and how Azure Container Apps provides a serverless containers platform for hosting all of your Cloud-Native applications. Today, we’ll walk through a few ways to get your apps up and running on Azure Container Apps.

    Depending on where you are in your Cloud-Native app development journey, you might choose to use different tools to deploy your apps.

    • “Right-click, publish” – Deploying an app directly from an IDE or code editor is often seen as a bad practice, but it’s one of the quickest ways to test out an app in a cloud environment.
    • Command line interface – CLIs are useful for deploying apps from a terminal. Commands can be run manually or in a script.
    • Continuous integration/deployment – To deploy production apps, the recommended approach is to automate the process in a robust CI/CD pipeline.

    Let's explore some of these options in more depth.

    Visual Studio

    Visual Studio 2022 has built-in support for deploying .NET applications to Azure Container Apps. You can use the familiar publish dialog to provision Container Apps resources and deploy to them directly. This helps you prototype an app and see it run in Azure Container Apps with the least amount of effort.

    Journey to the cloud with Azure Container Apps

    Once you’re happy with the app and it’s ready for production, Visual Studio allows you to push your code to GitHub and set up a GitHub Actions workflow to build and deploy your app every time you push changes. You can do this by checking a box.

    Journey to the cloud with Azure Container Apps

    Visual Studio Code

    There are a couple of valuable extensions that you’ll want to install if you’re working in VS Code.

    Docker extension

    The Docker extension provides commands for building a container image for your app and pushing it to a container registry. It can even do this without requiring Docker Desktop on your local machine --- the “Build image in Azure” command remotely builds and pushes a container image to Azure Container Registry.

    Journey to the cloud with Azure Container Apps

    And if your app doesn't have a Dockerfile, the extension will generate one for you.

    Journey to the cloud with Azure Container Apps

    Azure Container Apps extension

    Once you’ve built your container image and pushed it to a registry, the Azure Container Apps VS Code extension provides commands for creating a container app and deploying revisions using the image you’ve built.

    Journey to the cloud with Azure Container Apps

    Azure CLI

    The Azure CLI can be used to manage pretty much anything in Azure. For Azure Container Apps, you’ll find commands for creating, updating, and managing your Container Apps resources.

    Just like in VS Code, with a few commands in the Azure CLI, you can create your Azure resources, build and push your container image, and then deploy it to a container app.

    To make things as simple as possible, the Azure CLI also has an “az containerapp up” command. This single command takes care of everything that’s needed to turn your source code from your local machine to a cloud-hosted application in Azure Container Apps.

    az containerapp up --name myapp --source ./src

    We saw earlier that Visual Studio can generate a GitHub Actions workflow to automatically build and deploy your app on every commit. “az containerapp up” can do this too. The following adds a workflow to a repo.

    az containerapp up --name myapp --repo https://github.com/myorg/myproject

    CI/CD pipelines

    When it’s time to take your app to production, it’s strongly recommended to set up a CI/CD pipeline to automatically and repeatably build, test, and deploy it. We’ve already seen that tools such as Visual Studio and Azure CLI can automatically generate a workflow for GitHub Actions. You can set up a pipeline in Azure DevOps too. This is an example Azure DevOps pipeline.

    trigger:
      branches:
        include:
          - main

    pool:
      vmImage: ubuntu-latest

    stages:
      - stage: Build
        jobs:
          - job: build
            displayName: Build app
            steps:
              - task: Docker@2
                inputs:
                  containerRegistry: 'myregistry'
                  repository: 'hello-aca'
                  command: 'buildAndPush'
                  Dockerfile: 'hello-container-apps/Dockerfile'
                  tags: '$(Build.BuildId)'

      - stage: Deploy
        jobs:
          - job: deploy
            displayName: Deploy app
            steps:
              - task: AzureCLI@2
                inputs:
                  azureSubscription: 'my-subscription(5361b9d6-46ea-43c3-a898-15f14afb0db6)'
                  scriptType: 'bash'
                  scriptLocation: 'inlineScript'
                  inlineScript: |
                    # automatically install Container Apps CLI extension
                    az config set extension.use_dynamic_install=yes_without_prompt

                    # ensure registry is configured in container app
                    az containerapp registry set \
                      --name hello-aca \
                      --resource-group mygroup \
                      --server myregistry.azurecr.io \
                      --identity system

                    # update container app
                    az containerapp update \
                      --name hello-aca \
                      --resource-group mygroup \
                      --image myregistry.azurecr.io/hello-aca:$(Build.BuildId)

    Conclusion

    In this article, we looked at a few ways to deploy your Cloud-Native applications to Azure Container Apps and how to decide which one to use based on where you are in your app’s journey to the cloud.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/19/index.html b/blog/tags/serverless-september/page/19/index.html index e542f387a4..eaff63067c 100644 --- a/blog/tags/serverless-september/page/19/index.html +++ b/blog/tags/serverless-september/page/19/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 7 min read
    Paul Yu

    Welcome to Day 11 of #30DaysOfServerless!

    Yesterday we explored Azure Container Apps concepts related to environments, networking, and microservices communication, and illustrated these with a deployment example. Today, we turn our attention to scaling your container apps with demand.


    What We'll Cover

    • What makes ACA Serverless?
    • What is KEDA?
    • Scaling Your ACA
    • ACA Scaling In Action
    • Exercise: Explore azure-opensource-labs examples
    • Resources: For self-study!


    So, what makes Azure Container Apps "serverless"?

    Today we are going to focus on what makes Azure Container Apps (ACA) a "serverless" offering. But what does the term "serverless" really mean? As much as we'd like to think there aren't any servers involved, that is certainly not the case. In general, "serverless" means that most (if not all) server maintenance has been abstracted away from you.

    With serverless, you don't spend any time managing and patching servers. This concern is offloaded to Azure and you simply focus on adding business value through application delivery. In addition to operational efficiency, cost efficiency can be achieved with serverless on-demand pricing models. Your workload horizontally scales out based on need and you only pay for what you use. To me, this is serverless, and my teammate @StevenMurawski said it best... "being able to scale to zero is what gives ACA its serverless magic."

    Scaling your Container Apps

    If you don't know by now, ACA is built on a solid open-source foundation. Behind the scenes, it runs on a managed Kubernetes cluster and includes several open-source components out of the box, including Dapr to help you build and run microservices, Envoy Proxy for ingress capabilities, and KEDA for event-driven autoscaling. Again, you do not need to install these components yourself. All you need to be concerned with is enabling and/or configuring your container app to leverage these components.

    Let's take a closer look at autoscaling in ACA to help you optimize your container app.

    What is KEDA?

    KEDA stands for Kubernetes Event-Driven Autoscaler. It is an open-source project initially started by Microsoft and Red Hat and has been donated to the Cloud-Native Computing Foundation (CNCF). It is maintained by a community of 200+ contributors and adopted by many large organizations. In terms of its status as a CNCF project, it is currently in the Incubating Stage, which means the project has gone through significant due diligence and is on its way towards the Graduation Stage.

    Prior to KEDA, horizontally scaling your Kubernetes deployment was achieved through the Horizontal Pod Autoscaler (HPA), which relies on resource metrics such as CPU and memory to determine when additional replicas should be deployed. Being limited to CPU and memory falls a bit short for certain workloads. This is especially true for apps that need to process messages from a queue, or HTTP-based apps that can handle a specific number of incoming HTTP requests at a time. KEDA aims to fill that gap and provides a much more robust framework for scaling by working in conjunction with HPA. It offers many scalers for you to implement and even allows your deployments to scale to zero! 🥳

    KEDA architecture

    Configuring ACA scale rules

    As I mentioned above, ACA's autoscaling feature leverages KEDA and gives you the ability to configure the number of replicas to deploy based on rules (event triggers). The number of replicas can be configured as a static number or a range (minimum and maximum). So if you need your containers to run 24/7, set the min and max to the same value. By default, when you deploy a container app, it is set to scale from 0 to 10 replicas. The default scaling rule uses HTTP scaling and defaults to a minimum of 10 concurrent requests per second. Once the threshold of 10 concurrent requests per second is met, another replica will be deployed, up to the maximum number of replicas.

    At the time of this writing, a container app can have up to 30 replicas.

    Default autoscaler

    As a best practice, if you have a min/max replica range configured, you should configure a scaling rule, even if it is just explicitly setting the default values (a sketch follows below).
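
    In an ACA YAML manifest, explicitly setting those defaults might look roughly like the fragment below. This is a sketch; double-check the property names against the current container app schema before using it.

    scale:
      minReplicas: 0
      maxReplicas: 10
      rules:
      - name: http-rule
        http:
          metadata:
            concurrentRequests: "10"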

    Adding HTTP scaling rule

    In addition to HTTP scaling, you can also configure an Azure queue rule, which allows you to use Azure Storage Queues as an event data source.

    Adding Azure Queue scaling rule

    The most flexibility comes with the Custom rule type. This opens up a LOT more options for scaling. All of KEDA's event-based scalers are supported with this option 🚀

    Adding Custom scaling rule

    Translating KEDA templates to Azure templates

    When you implement Custom rules, you need to become familiar with translating KEDA templates to Azure Resource Manager templates or ACA YAML manifests. The KEDA scaler documentation is great, and it should be simple to translate KEDA template metadata to ACA rule metadata.

    The images below show how to translate a scaling rule that uses Azure Service Bus as an event data source. The custom rule type is set to azure-servicebus and the details of the Service Bus are added to the Metadata section. One important thing to note here is that the connection string to the Service Bus was added as a secret on the container app, and the trigger parameter must be set to connection (a YAML sketch follows the screenshots below).

    Azure Container App custom rule metadata

    Azure Container App custom rule metadata
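
    Here is a rough YAML sketch of what that translated rule might look like on the container app. The queue name and rule name are illustrative, and the referenced secret must already exist on the app.

    scale:
      minReplicas: 0
      maxReplicas: 5
      rules:
      - name: servicebus-rule
        custom:
          type: azure-servicebus
          metadata:
            queueName: myqueue        # illustrative queue name
            messageCount: "5"
          auth:
          - secretRef: servicebus-connectionstring
            triggerParameter: connection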

    Additional examples of KEDA scaler conversion can be found in the resources section and example video below.

    See Container App scaling in action

    Now that we've built up some foundational knowledge on how ACA autoscaling is implemented and configured, let's look at a few examples.

    Autoscaling based on HTTP traffic load

    Autoscaling based on Azure Service Bus message queues

    Summary

    ACA brings you a true serverless experience and gives you the ability to configure autoscaling rules based on KEDA scaler templates. This gives you the flexibility to scale based on a wide variety of data sources in an event-driven manner. With the number of built-in scalers currently available, there is probably a scaler out there for all your use cases. If not, I encourage you to get involved with the KEDA community and help make it better!

    Exercise

    By now, you've probably read and seen enough and are ready to give autoscaling a try. The example I walked through in the videos above can be found in the azure-opensource-labs repo. I highly encourage you to head over to the containerapps-terraform folder and try the lab out. There you'll find instructions covering all the steps and tools you'll need to implement autoscaling container apps within your own Azure subscription.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun scaling your containers!

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/2/index.html b/blog/tags/serverless-september/page/2/index.html index 37910950da..d8d983a5bf 100644 --- a/blog/tags/serverless-september/page/2/index.html +++ b/blog/tags/serverless-september/page/2/index.html @@ -14,7 +14,7 @@ - + @@ -26,7 +26,7 @@

    ...and that's it! We've successfully deployed our application on Azure!

    But there's more!

    Best practices: Monitoring and CI/CD!

    In my opinion, it's not enough to just set up the application on Azure! I want to know that my web app is performant and serving my users reliably! I also want to make sure that I'm not inadvertently breaking my application as I continue to make changes to it. Thankfully, the Azure Developer CLI also handles all of this via two additional commands - azd monitor and azd pipeline config.

    Application Monitoring

    When we provisioned all of our infrastructure, we also set up application monitoring via a Bicep file in our .infra/ directory that spec'd out an Application Insights dashboard. By running azd monitor we can see the dashboard with live metrics that was configured for the application.

    We can also navigate to the Application Dashboard by clicking on the resource group name, where you can set a specific refresh rate for the dashboard, and see usage, reliability, and performance metrics over time.

    I don't know about everyone else but I have spent a ton of time building out similar dashboards. It can be super time-consuming to write all the queries and create the visualizations so this feels like a real time saver.

    CI/CD

    Finally let's talk about setting up CI/CD! This might be my favorite azd feature. As I mentioned before, the Azure Developer CLI has a command, azd pipeline config, which uses the files in the .github/ directory to set up a GitHub Action. More than that, if there is no upstream repo, the Developer CLI will actually help you create one. But what does this mean exactly? Because our GitHub Action is using the same commands you'd run in the CLI under the hood, we're actually going to have CI/CD set up to run on every commit into the repo, against real Azure resources. What a sweet collaboration feature!

    That's it! We've gone end-to-end with the Azure Developer CLI - initialized a project, provisioned the resources on Azure, deployed our code on Azure, set up monitoring logs and dashboards, and set up a CI/CD pipeline with GitHub Actions to run on every commit into the repo (on real Azure resources!).

    Exercise: Try it yourself or create your own template!

    As an exercise, try out the workflow above with any template on GitHub!

    Or, try turning your own project into an Azure Developer CLI-enabled template by following this guidance. If you create your own template, don't forget to tag the repo with the azd-templates topic on GitHub to help others find it (unfamiliar with GitHub topics? Learn how to add topics to your repo)! We'd also love to chat with you about your experience creating an azd template - if you're open to providing feedback around this, please fill out this form!

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/20/index.html b/blog/tags/serverless-september/page/20/index.html index 9c33290b75..d96b60e9a6 100644 --- a/blog/tags/serverless-september/page/20/index.html +++ b/blog/tags/serverless-september/page/20/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 8 min read
    Paul Yu

    Welcome to Day 10 of #30DaysOfServerless!

    We continue our exploration of Azure Container Apps, with today's focus being communication between microservices, and how to configure your Azure Container Apps environment in the context of a deployment example.


    What We'll Cover

    • ACA Environments & Virtual Networking
    • Basic Microservices Communications
    • Walkthrough: ACA Deployment Example
    • Summary and Next Steps
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    In yesterday's post, we learned what the Azure Container Apps (ACA) service is and the problems it aims to solve. It is considered a Container-as-a-Service platform, since many of the complex implementation details of running a Kubernetes cluster are managed for you.

    Some of the use cases for ACA include event-driven processing jobs and background tasks, but this article will focus on hosting microservices and how they can communicate with each other within the ACA service. By the end of this article, you will have a solid understanding of how networking and communication are handled, and you'll have a few tutorials to try.

    Environments and virtual networking in ACA

    Before we jump into microservices communication, we should review how networking works within ACA. With ACA being a managed service, Azure will take care of most of your underlying infrastructure concerns. As you provision an ACA resource, Azure provisions an Environment to deploy Container Apps into. This environment is your isolation boundary.

    Azure Container Apps Environment

    By default, Azure creates and manages a new Virtual Network (VNET) for you and the VNET is associated with the environment. As you deploy container apps, they are deployed into the same VNET and the environment is assigned a static public IP address which allows your apps to be accessible over the internet. This VNET is not visible or manageable.

    If you need control of the networking flows within the VNET, you can pre-provision one and tell Azure to deploy an environment within it. This "bring-your-own" VNET model allows you to deploy an environment in either External or Internal modes. Deploying an environment in External mode gives you the flexibility of managing your own VNET, while still allowing your containers to be accessible from outside the environment; a static public IP address is assigned to the environment. When deploying in Internal mode, your containers are accessible within the environment and/or VNET but not accessible from the internet.

    Bringing your own VNET will require some planning, and you will need to dedicate an empty subnet which will be used exclusively by the ACA environment. The size of your subnet will depend on how many containers you plan on deploying and your scaling requirements; one requirement to know is that the subnet address range must have a /23 CIDR prefix at minimum. You will also need to think about your deployment strategy, since ACA has the concept of Revisions which will also consume IPs from your subnet.

    Some additional restrictions to consider when planning your subnet address space are listed in the Resources section below and can be addressed in future posts, so be sure to follow us on dev.to and bookmark the ServerlessSeptember site.

    Basic microservices communication in ACA

    When it comes to communications between containers, ACA addresses this concern with its Ingress capabilities. With HTTP ingress enabled on your container app, you can expose your app on an HTTPS endpoint.

    If your environment is deployed using default networking and your containers need to be accessible from outside the environment, you will need to set the Ingress traffic option to Accepting traffic from anywhere. This will generate a Fully-Qualified Domain Name (FQDN) which you can use to access your app right away. The ingress feature also generates and assigns a Secure Socket Layer (SSL) certificate for the FQDN.

    External ingress on Container App

    If your environment is deployed using default networking and your containers only need to communicate with other containers in the environment, you'll need to set the Ingress traffic option to Limited to Container Apps Environment. You get an FQDN here as well, but in the section below we'll see how that changes.

    Internal ingress on Container App

    As mentioned in the networking section above, if you deploy your ACA environment into a VNET in internal mode, your options will be Limited to Container Apps Environment or Limited to VNet.

    Ingress on internal virtual network

    Note how the Accepting traffic from anywhere option is greyed out. If your VNET is deployed in external mode, then the option will be available.

    Let's walk through an example ACA deployment

    The diagram below illustrates a simple microservices application that I deployed to ACA. The three container apps all have ingress enabled. The greeting-service app calls two backend services: a hello-service that returns the text Hello (in random casing) and a world-service that returns the text World (in a few random languages). The greeting-service concatenates the two strings together and returns Hello World to the browser. The greeting-service is the only service accessible via external ingress, while the two backend services are only accessible via internal ingress.

    Greeting Service overview

    With ingress enabled, let's take a quick look at the FQDN structures. Here is the FQDN of the external greeting-service.

    https://greeting-service.victoriouswave-3749d046.eastus.azurecontainerapps.io

    We can break it down into these components:

    https://[YOUR-CONTAINER-APP-NAME].[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    And here is the FQDN of the internal hello-service.

    https://hello-service.internal.victoriouswave-3749d046.eastus.azurecontainerapps.io

    Can you spot the difference between FQDNs?

    That was too easy 😉... the word internal is added as a subdomain in the FQDN between your container app name and the random name for all internal ingress endpoints.

    https://[YOUR-CONTAINER-APP-NAME].internal.[RANDOM-NAME]-[RANDOM-CHARACTERS].[AZURE-REGION].azurecontainerapps.io

    Now that we know the internal service FQDNs, we use them in the greeting-service app to achieve basic service-to-service communications.

    So we can inject the FQDNs of downstream APIs into upstream apps using environment variables, but the downside to this approach is that you need to deploy the downstream containers ahead of time, and this dependency will need to be planned for during your deployment process. There are ways around this, and one option is to leverage the auto-injected environment variables within your app code.

    If you use the Console blade for the hello-service container app and run the env command, you will see environment variables named CONTAINER_APP_NAME and CONTAINER_APP_ENV_DNS_SUFFIX. You can use these values to determine FQDNs within your upstream app.

    hello-service environment variables

    Back in the greeting-service container, I can invoke the hello-service container's sayhello method. I know the container app name is hello-service and this service is exposed over an internal ingress; therefore, if I add the internal subdomain to the CONTAINER_APP_ENV_DNS_SUFFIX, I can invoke an HTTP request to the hello-service from my greeting-service container.

    Invoke the sayHello method from the greeting-service container

    As you can see, the ingress feature enables communication with other container apps over HTTP/S, and ACA injects environment variables into our container to help determine what the ingress FQDNs would be. All we need now is a little bit of code modification in the greeting-service app to build the FQDNs of our backend APIs by retrieving these environment variables.

    Greeting service code
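
    If you want to experiment outside the sample, a minimal Python sketch of the same idea might look like this. The service names and the sayhello route follow the scenario above, while the world-service route is a hypothetical placeholder.

    import os
    import urllib.request

    # Injected automatically into every container app in the environment
    DNS_SUFFIX = os.environ["CONTAINER_APP_ENV_DNS_SUFFIX"]

    def call_internal(service_name: str, path: str) -> str:
        # Internal ingress FQDNs include the "internal" subdomain
        url = f"https://{service_name}.internal.{DNS_SUFFIX}{path}"
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode()

    hello = call_internal("hello-service", "/sayhello")  # e.g. "HeLLo"
    world = call_internal("world-service", "/")          # hypothetical route
    print(f"{hello} {world}")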

    ... and now we have a working microservices app on ACA! 🎉

    Hello World

    Summary and next steps

    We've covered Container Apps networking and the basics of how containers communicate with one another. However, there is a better way to address service-to-service invocation using Dapr, which is an open-source framework for building microservices. It is natively integrated into the ACA service and in a future post, you'll learn how to enable it in your Container App to address microservices concerns and more. So stay tuned!

    Exercises

    As a takeaway for today's post, I encourage you to complete this tutorial, and if you'd like to deploy the sample app that was presented in this article, my teammate @StevenMurawski is hosting a docker-compose-examples repo which includes samples for deploying to ACA using Docker Compose files. To learn more about the az containerapp compose command, links to his blog articles are listed in the Resources section below.

    If you have any questions or feedback, please let us know in the comments below or reach out on Twitter @pauldotyu

    Have fun packing and shipping containers! See you in the next post!

    Resources

    The sample app presented here was inspired by services demonstrated in the book Introducing Distributed Application Runtime (Dapr): Simplifying Microservices Applications Development Through Proven and Reusable Patterns and Practices. Go check it out to learn more about Dapr!

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/21/index.html b/blog/tags/serverless-september/page/21/index.html index e258591665..47f0d7edc2 100644 --- a/blog/tags/serverless-september/page/21/index.html +++ b/blog/tags/serverless-september/page/21/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 12 min read
    Nitya Narasimhan

    Welcome to Day 9 of #30DaysOfServerless!


    What We'll Cover

    • The Week Ahead
    • Hello, Container Apps!
    • Quickstart: Build Your First ACA!
    • Under The Hood: Core ACA Concepts
    • Exercise: Try this yourself!
    • Resources: For self-study!


    The Week Ahead

    Welcome to Week 2 of #ServerlessSeptember, where we put the focus on Microservices and building Cloud-Native applications that are optimized for serverless solutions on Azure. One week is not enough to do this complex topic justice so consider this a 7-part jumpstart to the longer journey.

    1. Hello, Container Apps (ACA) - Learn about Azure Container Apps, a key service that helps you run microservices and containerized apps on a serverless platform. Know the core concepts. (Tutorial 1: First ACA)
    2. Communication with Microservices - Dive deeper into two key concepts: environments and virtual networking. Learn how microservices communicate in ACA, and walkthrough an example. (Tutorial 2: ACA with 3 Microservices)
    3. Scaling Your Container Apps - Learn about KEDA. Understand how to configure your ACA for auto-scaling with KEDA-supported triggers. Put this into action by walking through a tutorial. (Tutorial 3: Configure Autoscaling)
    4. Hello, Distributed Application Runtime (Dapr) - Learn about Dapr and how its Building Block APIs simplify microservices development with ACA. Know how the sidecar pattern enables incremental adoption of Dapr APIs without requiring any Dapr code integration in app. (Tutorial 4: Setup & Explore Dapr)
    5. Building ACA with Dapr - See how Dapr works with ACA by building a Dapr-enabled Azure Container App. Walk through a .NET tutorial using Pub/Sub and State Management APIs in an enterprise scenario. (Tutorial 5: Build ACA with Dapr)
    6. Managing Secrets With Dapr - We'll look at the Secrets API (a key Building Block of Dapr) and learn how it simplifies management of sensitive information in ACA.
    7. Microservices + Serverless On Azure - We recap Week 2 (Microservices) and set the stage for Week 3 ( Integrations) of Serverless September. Plus, self-study resources including ACA development tutorials in different languages.

    Ready? Let's go!


    Azure Container Apps!

    When building your application, your first decision is about where you host your application. The Azure Architecture Center has a handy chart to help you decide between choices like Azure Functions, Azure App Service, Azure Container Instances, Azure Container Apps and more. But if you are new to this space, you'll need a good understanding of the terms and concepts behind these services. Today, we'll focus on Azure Container Apps (ACA) - so let's start with the fundamentals.

    Containerized App Defined

    A containerized app is one where the application components, dependencies, and configuration are packaged into a single file (container image), which can be instantiated in an isolated runtime environment (container) that is portable across hosts (OS). This makes containers lightweight and scalable - and ensures that applications behave consistently on different host platforms.

    Container images can be shared via container registries (public or private) helping developers discover and deploy related apps with less effort. Scaling a containerized app can be as simple as activating more instances of its container image. However, this requires container orchestrators to automate the management of container apps for efficiency. Orchestrators use technologies like Kubernetes to support capabilities like workload scheduling, self-healing and auto-scaling on demand.

    Cloud-Native & Microservices

    Containers are seen as one of the 5 pillars of Cloud-Native app development, an approach where applications are designed explicitly to take advantage of the unique benefits of modern dynamic environments (involving public, private and hybrid clouds). Containers are particularly suited to serverless solutions based on microservices.

    • With serverless - developers use managed services instead of managing their own infrastructure. Services are typically event-driven and can be configured for autoscaling with rules tied to event triggers. Serverless is cost-effective, with developers paying only for the compute cycles and resources they use.
    • With microservices - developers compose their applications from independent components. Each component can be deployed in its own container, and scaled at that granularity. This simplifies component reuse (across apps) and maintainability (over time) - with developers evolving functionality at microservice (vs. app) levels.

    Hello, Azure Container Apps!

    Azure Container Apps is the managed service that helps you run containerized apps and microservices as a serverless compute solution, on Azure. You can:

    • deploy serverless API endpoints - autoscaled by HTTP request traffic
    • host background processing apps - autoscaled by CPU or memory load
    • handle event-driven processing - autoscaled by #messages in queue
    • run microservices - autoscaled by any KEDA-supported scaler.

    Want a quick intro to the topic? Start by watching the short video below - then read these two posts from our ZeroToHero series:


    Deploy Your First ACA

    Dev Options

    We typically have three options for development:

    • Use the Azure Portal - provision and deploy from a browser.
    • Use Visual Studio Code (with relevant extensions) - if you prefer an IDE
    • Using Azure CLI - if you prefer to build and deploy from command line.

    The documentation site has quickstarts for three contexts:

    For this quickstart, we'll go with the first option (sample image) so we can move quickly to core concepts. We'll leave the others as an exercise for you to explore.

    1. Setup Resources

    PRE-REQUISITES

    You need:

    • An Azure account with an active subscription
    • An installed Azure CLI

    Start by logging into Azure from the CLI. The command should launch a browser to complete the auth flow (or give you an option to take an alternative path).

    $ az login

    Successful authentication will result in extensive command-line output detailing the status of your subscription.

    Next, install the Azure Container Apps extension for the CLI

    $ az extension add --name containerapp --upgrade
    ...
    The installed extension 'containerapp' is in preview.

    Once successfully installed, register the Microsoft.App namespace.

    $ az provider register --namespace Microsoft.App

    Then set local environment variables in that terminal - and verify they are set correctly:

    $ RESOURCE_GROUP="my-container-apps"
    $ LOCATION="canadacentral"
    $ CONTAINERAPPS_ENVIRONMENT="my-environment"

    $ echo $LOCATION $RESOURCE_GROUP $CONTAINERAPPS_ENVIRONMENT
    canadacentral my-container-apps my-environment

    Now you can use Azure CLI to provision a resource group for this tutorial. Creating a resource group also makes it easier for us to delete/reclaim all resources used at the end of this tutorial.

    az group create \
    --name $RESOURCE_GROUP \
    --location $LOCATION
    Congratulations

    You completed the Setup step!

    On completion, the console should print out the details of the newly created resource group. You should also be able to visit the Azure Portal and find the newly-active my-container-apps resource group under your active subscription.

    2. Create Environment

    An environment is like the picket fence around your property. It creates a secure boundary that contains a group of container apps - such that all apps deployed to it share the same virtual network and logging resources.

    $ az containerapp env create \
    --name $CONTAINERAPPS_ENVIRONMENT \
    --resource-group $RESOURCE_GROUP \
    --location $LOCATION

    No Log Analytics workspace provided.
    Generating a Log Analytics workspace with name ...

    This can take a few minutes. When done, you will see the terminal display more details. You can also check the resource group in the portal and see that a Container Apps Environment and a Log Analytics Workspace are created for you as part of this step.

    You've got the fence set up. Now it's time to build your home - er, container app!

    3. Create Container App

    Here's the command we'll use to create our first Azure Container App. Note that the --image argument provides the link to a pre-existing containerapps-helloworld image.

    az containerapp create \
    --name my-container-app \
    --resource-group $RESOURCE_GROUP \
    --environment $CONTAINERAPPS_ENVIRONMENT \
    --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
    --target-port 80 \
    --ingress 'external' \
    --query properties.configuration.ingress.fqdn
    ...
    ...

    Container app created. Access your app at <URL>

    The --ingress property shows that the app is open to external requests; in other words, it is publicly visible at the <URL> that is printed out on the terminal on successful completion of this step.

    4. Verify Deployment

    Let's see if this works. You can verify that your container app is running by visiting the URL returned above in your browser. You should see something like this!

    Container App Hello World

    You can also visit the Azure Portal and look under the created Resource Group. You should see that a new Container App resource was created after this step.

    Congratulations

    You just created and deployed your first "Hello World" Azure Container App! This validates your local development environment setup and existence of a valid Azure subscription.

    5. Clean Up Your Resources

    It's good practice to clean up resources once you are done with a tutorial.

    THIS ACTION IS IRREVERSIBLE

    This command deletes the resource group we created above - and all resources in it. So make sure you specified the right name, then confirm deletion.

    $ az group delete --name $RESOURCE_GROUP
    Are you sure you want to perform this operation? (y/n):

    Note that you can also delete the resource group from the Azure Portal interface if that feels more comfortable. For now, we'll just use the Portal to verify that deletion occurred. If you had previously opened the Resource Group page for the created resource, just refresh it. You should see something like this:

    Resource Not Found


    Core Concepts

    COMING SOON

    An illustrated guide summarizing these concepts in a single sketchnote.

    We covered a lot today - we'll stop with a quick overview of core concepts behind Azure Container Apps, each linked to documentation for self-study. We'll dive into more details on some of these concepts in upcoming articles:

    • Environments - are the secure boundary around a group of container apps that are deployed in the same virtual network. They write logs to a shared Log Analytics workspace and can communicate seamlessly using Dapr, if used.
    • Containers refer to the container image deployed in the Azure Container App. They can use any runtime, programming language, or development stack - and be discovered using any public or private container registry. A container app can support multiple containers.
    • Revisions are immutable snapshots of an Azure Container App. The first revision is created when the ACA is first deployed, with new revisions created when redeployment occurs with revision-scope changes. Multiple revisions can run concurrently in an environment.
    • Application Lifecycle Management revolves around these revisions, with a container app having three phases: deployment, update and deactivation.
    • Microservices are independent units of functionality in Cloud-Native architectures. A single container app typically represents a single microservice, and can be composed from one or more containers. Microservices can now be scaled and upgraded independently, giving your application more flexibility and control.
    • Networking architecture consists of a virtual network (VNET) associated with the environment. Unless you provide a custom VNET at environment creation time, a default VNET is automatically created. The VNET configuration determines access (ingress, internal vs. external) and can influence auto-scaling choices (e.g., use HTTP Edge Proxy and scale based on number of HTTP requests).
    • Observability is about monitoring the health of your application and diagnosing it to improve reliability or performance. Azure Container Apps has a number of features - from Log streaming and Container console to integration with Azure Monitor - to provide a holistic view of application status over time.
    • Easy Auth is possible with built-in support for authentication and authorization including support for popular identity providers like Facebook, Google, Twitter and GitHub - alongside the Microsoft Identity Platform.

    Keep these terms in mind as we walk through more tutorials this week, to see how they find application in real examples. Finally, a note on Dapr, the Distributed Application Runtime that abstracts away many of the challenges posed by distributed systems - and lets you focus on your application logic.

    DAPR INTEGRATION MADE EASY

    Dapr uses a sidecar architecture, allowing Azure Container Apps to communicate with Dapr Building Block APIs over either gRPC or HTTP. Your ACA can be built to run with or without Dapr - giving you the flexibility to incrementally adopt specific APIs and unlock related capabilities as the need arises.

    In later articles this week, we'll do a deeper dive into Dapr and build our first Dapr-enabled Azure Container App to get a better understanding of this integration.

    Exercise

    Congratulations! You made it! By now you should have a good idea of what Cloud-Native development means, why Microservices and Containers are important to that vision - and how Azure Container Apps helps simplify the building and deployment of microservices based applications using serverless architectures on Azure.

    Now it's your turn to reinforce learning by doing.

    Resources

    Three key resources to bookmark and explore:

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/22/index.html b/blog/tags/serverless-september/page/22/index.html index b5523fce97..747b5de36d 100644 --- a/blog/tags/serverless-september/page/22/index.html +++ b/blog/tags/serverless-september/page/22/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt, and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings, that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
    • Integrate Functions with CosmosDB and SignalR to bring real-time, push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

    Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn which walks your through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days) but this time, with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

    Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - it explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/23/index.html b/blog/tags/serverless-september/page/23/index.html index a0e815a568..58e8d3ac79 100644 --- a/blog/tags/serverless-september/page/23/index.html +++ b/blog/tags/serverless-september/page/23/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    33 posts tagged with "serverless-september"

    View All Tags

    · 7 min read
    Jay Miller

    Welcome to Day 7 of #30DaysOfServerless!

    Over the past couple of days, we've explored Azure Functions from the perspective of specific programming languages. Today we'll continue that trend by looking at Python - exploring the Timer Trigger and CosmosDB binding, and showcasing integration with a FastAPI-implemented web app.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance: Azure Functions On Python
    • Build & Deploy: Wildfire Detection Apps with Timer Trigger + CosmosDB
    • Demo: My Fire Map App: Using FastAPI and Azure Maps to visualize data
    • Next Steps: Explore Azure Samples
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Developer Guidance

    If you're a Python developer new to serverless on Azure, start with the Azure Functions Python Developer Guide. It covers:

    • Quickstarts with Visual Studio Code and Azure CLI
    • Adopting best practices for hosting, reliability and efficiency.
    • Tutorials showcasing Azure automation, image classification and more
    • Samples showcasing Azure Functions features for Python developers

    Now let's dive in and build our first Python-based Azure Functions app.


    Detecting Wildfires Around the World?

    I live in California which is known for lots of wildfires. I wanted to create a proof of concept for developing an application that could let me know if there was a wildfire detected near my home.

    NASA has a few satellites orbiting the Earth that can detect wildfires. These satellites take scans of the radiative heat and use that to determine the likelihood of a wildfire. NASA updates this information about every 30 minutes, and it can take about four hours to scan and process it.

    Fire Point Near Austin, TX

I want to get the information, but I don't want to ping NASA or another service every time I check.

What if I occasionally download all the data I need? Then I can query that copy as much as I like.

I can create a script that does just that. Any time I say "I can create a script," that's a verbal cue for me to consider using an Azure Function. With the function running in the cloud, I can ensure the script runs even when I'm not at my computer.

    How the Timer Trigger Works

This function will utilize the Timer Trigger. This means Azure will call this function to run at a scheduled interval. This isn't the only way to keep the data in sync, but we know that ArcGIS, the service that we're using, only updates its data every 30 minutes or so.

    To learn more about the TimerTrigger as a concept, check out the Azure Functions documentation around Timers.

When we create the function, we tell it a few things: where the script lives (in our case, __init__.py), the binding type and direction, and notably how often it should run. We specify the timer using "schedule": "<CRON EXPRESSION>". For us, that's 0 0,30 * * * *, which means every 30 minutes, at the hour and half-hour.

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "reqTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0,30 * * * *"
    }
  ]
}

    Next, we create the code that runs when the function is called.

    Connecting to the Database and our Source

Disclaimer: The data that we're pulling is for educational purposes only. This is not meant to be a production-level application. You're welcome to play with this project, but ensure that you're using the data in compliance with Esri.

    Our function does two important things.

1. It pulls data from ArcGIS that meets our query parameters
2. It stores that data in our database

    If you want to check out the code in its entirety, check out the GitHub repository.

    Pulling the data from ArcGIS is easy. We can use the ArcGIS Python API. Then, we need to load the service layer. Finally we query that layer for the specific data.

def write_new_file_data(gis_id: str, layer: int = 0) -> FeatureSet:
    """Returns a FeatureSet of recent, high-confidence fire points."""
    fire_data = g.content.get(gis_id)  # g is an authenticated arcgis.gis.GIS object
    feature = fire_data.layers[layer]  # Loading the feature layer from ArcGIS
    q = feature.query(
        where="confidence >= 65 AND hours_old <= 4",  # The filter for the query
        return_distinct_values=True,
        out_fields="confidence, hours_old",  # The data we want to store with our points
        out_sr=4326,  # The spatial reference of the data
    )
    return q

    Then we need to store the data in our database.

We're using Cosmos DB for this. Cosmos DB is a NoSQL database, so the data is stored as JSON and looks a lot like a Python dictionary. That means we don't need to worry about converting the data into a format suitable for a relational database.

The second reason is that Cosmos DB is tied into the Azure ecosystem, so if we want to trigger other Azure Functions from events around it, we can.

    Our script grabs the information that we pulled from ArcGIS and stores it in our database.

# Assumes the async Cosmos SDK (azure-cosmos) is installed and that
# COSMOS_CONNECTION_STRING, DATABASE and CONTAINER come from app settings.
from azure.cosmos.aio import CosmosClient

async with CosmosClient.from_connection_string(COSMOS_CONNECTION_STRING) as client:
    database = client.get_database_client(DATABASE)
    container = database.get_container_client(container=CONTAINER)
    for record in data:
        await container.create_item(
            record,
            enable_automatic_id_generation=True,
        )

In our code, each of these functions lives in its own module, so in the main function we focus solely on what the Azure Function itself will do. The script that gets called is __init__.py; from there, we call the other functions.

We created another function called load_and_write that does all the work outlined above; __init__.py will call it (a rough sketch follows the snippet below).

import azure.functions as func
import update_db  # the module that holds load_and_write (see the repository)

async def main(reqTimer: func.TimerRequest) -> None:
    # GIS_LAYER_ID, database and container are defined at module scope in the full project
    await update_db.load_and_write(gis_id=GIS_LAYER_ID, database=database, container=container)
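
For reference, here's a minimal sketch of what a load_and_write helper along those lines might look like; the signature, field handling, and names here are assumptions based on the snippets above, not the repository's actual code.

# Hypothetical sketch of load_and_write, stitching together the ArcGIS query
# and the Cosmos DB write shown earlier. Names and structure are assumptions.
async def load_and_write(gis_id: str, database, container) -> None:
    feature_set = write_new_file_data(gis_id)  # query ArcGIS (see above)
    for feature in feature_set.features:
        record = dict(feature.attributes)            # confidence, hours_old, ...
        record["geometry"] = dict(feature.geometry)  # the fire point's location
        await container.create_item(record, enable_automatic_id_generation=True)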

Then we deploy the function to Azure. I like to use VS Code's Azure extension, but you can also deploy it in a few other ways.

    Deploying the function via VS Code

Once the function is deployed, we can load the Azure portal and see a ping whenever the function is called. The pings correspond to the Function being run.

    We can also see the data now living in the datastore. Document in Cosmos DB

    It's in the Database, Now What?

    Now the real fun begins. We just loaded the last bit of fire data into a database. We can now query that data and serve it to others.

As I mentioned before, our Cosmos DB data is also stored in Azure, which means that we can deploy Azure Functions that trigger when new data is added. Perhaps you can use this to check for fires near you and use a Logic App to send an alert to your phone or email.
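
As a rough illustration (not part of the original project), a Cosmos DB-triggered Python function for that idea could look something like this, assuming a cosmosDBTrigger binding named documents is declared in its function.json:

import logging
import azure.functions as func

def main(documents: func.DocumentList) -> None:
    # Runs whenever new fire points land in the container; the alerting
    # hand-off (e.g., calling a Logic App endpoint) is left out here.
    for doc in documents:
        logging.info("New fire point recorded: %s", doc.to_json())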

    Another option is to create a web application that talks to the database and displays the data. I've created an example of this using FastAPI – https://jm-func-us-fire-notify.azurewebsites.net.

    Website that Checks for Fires


    Next Steps

This article showcased the Timer Trigger and the HTTP Trigger for Azure Functions in Python. Now try exploring other triggers and bindings by browsing Bindings code samples for Python and Azure Functions samples for Python.

    Once you've tried out the samples, you may want to explore more advanced integrations or extensions for serverless Python scenarios. Here are some suggestions:

    And check out the resources for more tutorials to build up your Azure Functions skills.

    Exercise

I encourage you to fork the repository and try building and deploying it yourself! You can see the TimerTrigger and an HTTPTrigger building the website.

    Then try extending it. Perhaps if wildfires are a big thing in your area, you can use some of the data available in Planetary Computer to check out some other datasets.

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/24/index.html b/blog/tags/serverless-september/page/24/index.html index e121105e96..eaebfa2a73 100644 --- a/blog/tags/serverless-september/page/24/index.html +++ b/blog/tags/serverless-september/page/24/index.html @@ -14,13 +14,13 @@ - +


    · 10 min read
    Mike James
    Matt Soucoup

    Welcome to Day 6 of #30DaysOfServerless!

    The theme for this week is Azure Functions. Today we're going to talk about why Azure Functions are a great fit for .NET developers.


    What We'll Cover

    • What is serverless computing?
    • How does Azure Functions fit in?
    • Let's build a simple Azure Function in .NET
    • Developer Guide, Samples & Scenarios
    • Exercise: Explore the Create Serverless Applications path.
    • Resources: For self-study!

A banner image that has the title of this article with the author's photo and a drawing that summarizes the demo application.


    The leaves are changing colors and there's a chill in the air, or for those lucky folks in the Southern Hemisphere, the leaves are budding and a warmth is in the air. Either way, that can only mean one thing - it's Serverless September!🍂 So today, we're going to take a look at Azure Functions - what they are, and why they're a great fit for .NET developers.

    What is serverless computing?

    For developers, serverless computing means you write highly compact individual functions that do one thing - and run in the cloud. These functions are triggered by some external event. That event could be a record being inserted into a database, a file uploaded into BLOB storage, a timer interval elapsed, or even a simple HTTP request.

    But... servers are still definitely involved! What has changed from other types of cloud computing is that the idea and ownership of the server has been abstracted away.

A lot of the time you'll hear folks refer to this as Functions as a Service, or FaaS. The defining characteristic is that all you need to do is put together your application logic. Your code is going to be invoked in response to events - and the cloud provider takes care of everything else. You literally get to focus on only the business logic you need to run in response to something of interest - no worries about hosting.

    You do not need to worry about wiring up the plumbing between the service that originates the event and the serverless runtime environment. The cloud provider will handle the mechanism to call your function in response to whatever event you chose to have the function react to. And it passes along any data that is relevant to the event to your code.

    And here's a really neat thing. You only pay for the time the serverless function is running. So, if you have a function that is triggered by an HTTP request, and you rarely get requests to your function, you would rarely pay.

    How does Azure Functions fit in?

    Microsoft's Azure Functions is a modern serverless architecture, offering event-driven cloud computing that is easy for developers to use. It provides a way to run small pieces of code or Functions in the cloud without developers having to worry themselves about the infrastructure or platform the Function is running on.

    That means we're only concerned about writing the logic of the Function. And we can write that logic in our choice of languages... like C#. We are also able to add packages from NuGet to Azure Functions—this way, we don't have to reinvent the wheel and can use well-tested libraries.

And the Azure Functions runtime takes care of a ton of neat stuff for us, like passing in information about the event that caused it to kick off - in a strongly typed variable. It also "binds" to other services, like Azure Storage, so we can easily access those services from our code without having to worry about new'ing them up.

    Let's build an Azure Function!

    Scaffold the Function

    Don't worry about having an Azure subscription or even being connected to the internet—we can develop and debug Azure Functions locally using either Visual Studio or Visual Studio Code!

    For this example, I'm going to use Visual Studio Code to build up a Function that responds to an HTTP trigger and then writes a message to an Azure Storage Queue.

    Diagram of the how the Azure Function will use the HTTP trigger and the Azure Storage Queue Binding

    The incoming HTTP call is the trigger and the message queue the Function writes to is an output binding. Let's have at it!

    info

    You do need to have some tools downloaded and installed to get started. First and foremost, you'll need Visual Studio Code. Then you'll need the Azure Functions extension for VS Code to do the development with. Finally, you'll need the Azurite Emulator installed as well—this will allow us to write to a message queue locally.

    Oh! And of course, .NET 6!

    Now with all of the tooling out of the way, let's write a Function!

    1. Fire up Visual Studio Code. Then, from the command palette, type: Azure Functions: Create New Project

      Screenshot of create a new function dialog in VS Code

    2. Follow the steps as to which directory you want to create the project in and which .NET runtime and language you want to use.

      Screenshot of VS Code prompting which directory and language to use

    3. Pick .NET 6 and C#.

      It will then prompt you to pick the folder in which your Function app resides and then select a template.

      Screenshot of VS Code prompting you to pick the Function trigger template

      Pick the HTTP trigger template. When prompted for a name, call it: PostToAQueue.

    Execute the Function Locally

    1. After giving it a namespace, it prompts for an authorization level—pick Anonymous. Now we have a Function! Let's go ahead and hit F5 and see it run!
    info

    After the templates have finished installing, you may get a prompt to download additional components—these are NuGet packages. Go ahead and do that.

    When it runs, you'll see the Azure Functions logo appear in the Terminal window with the URL the Function is located at. Copy that link.

    Screenshot of the Azure Functions local runtime starting up

2. Type the link into a browser, adding a name parameter as shown in this example: http://localhost:7071/api/PostToAQueue?name=Matt. The Function will respond with a message. You can even set breakpoints in Visual Studio Code and step through the code!

    Write To Azure Storage Queue

    Next, we'll get this HTTP trigger Function to write to a local Azure Storage Queue. First we need to add the Storage NuGet package to our project. In the terminal, type:

    dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage

    Then set a configuration setting to tell the Function runtime where to find the Storage. Open up local.settings.json and set "AzureWebJobsStorage" to "UseDevelopmentStorage=true". The full file will look like:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": ""
  }
}

Then create a new class within your project. This class will hold nothing but properties. Call it whatever you want and add whatever properties you want to it. I called mine TheMessage and added Id and Name properties to it.

public class TheMessage
{
    public string Id { get; set; }
    public string Name { get; set; }
}

    Finally, change your PostToAQueue Function, so it looks like the following:


public static class PostToAQueue
{
    [FunctionName("PostToAQueue")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        [Queue("demoqueue", Connection = "AzureWebJobsStorage")] IAsyncCollector<TheMessage> messages,
        ILogger log)
    {
        string name = req.Query["name"];

        await messages.AddAsync(new TheMessage { Id = System.Guid.NewGuid().ToString(), Name = name });

        return new OkResult();
    }
}

    Note the addition of the messages variable. This is telling the Function to use the storage connection we specified before via the Connection property. And it is also specifying which queue to use in that storage account, in this case demoqueue.

All the code is doing is pulling the name out of the query string, new'ing up a TheMessage instance, and adding it to the IAsyncCollector variable.

    That will add the new message to the queue!

    Make sure Azurite is started within VS Code (both the queue and blob emulators). Run the app and send the same GET request as before: http://localhost:7071/api/PostToAQueue?name=Matt.

    If you have the Azure Storage Explorer installed, you can browse your local Queue and see the new message in there!

    Screenshot of Azure Storage Explorer with the new message in the queue

    Summing Up

    We had a quick look at what Microsoft's serverless offering, Azure Functions, is comprised of. It's a full-featured FaaS offering that enables you to write functions in your language of choice, including reusing packages such as those from NuGet.

    A highlight of Azure Functions is the way they are triggered and bound. The triggers define how a Function starts, and bindings are akin to input and output parameters on it that correspond to external services. The best part is that the Azure Function runtime takes care of maintaining the connection to the external services so you don't have to worry about new'ing up or disposing of the connections yourself.

We then wrote a quick Function that gets triggered off an HTTP request and then writes a query string parameter from that request into a local Azure Storage Queue.

    What's Next

    So, where can you go from here?

Think about how you can build real-world scenarios by integrating other Azure services. For example, you could use serverless integrations to build a workflow where an input payload received via an HTTP Trigger is stored in Blob Storage (output binding), which in turn triggers another service (e.g., Cognitive Services) that processes the blob and returns an enhanced result.
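
As a rough sketch of what the first hop of such a workflow might look like (the function name, the "uploads" container path, and the {rand-guid} blob name here are illustrative, not from this article):

public static class SaveUploadToBlob
{
    [FunctionName("SaveUploadToBlob")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequest req,
        [Blob("uploads/{rand-guid}.json", FileAccess.Write, Connection = "AzureWebJobsStorage")] Stream outputBlob,
        ILogger log)
    {
        // Copy the incoming payload into the bound blob; a Blob-triggered
        // Function (or another service) could then pick it up from "uploads".
        await req.Body.CopyToAsync(outputBlob);
        return new AcceptedResult();
    }
}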

    Keep an eye out for an update to this post where we walk through a scenario like this with code. Check out the resources below to help you get started on your own.

    Exercise

    This brings us close to the end of Week 1 with Azure Functions. We've learned core concepts, built and deployed our first Functions app, and explored quickstarts and scenarios for different programming languages. So, what can you do to explore this topic on your own?

    • Explore the Create Serverless Applications learning path which has several modules that explore Azure Functions integrations with various services.
    • Take up the Cloud Skills Challenge and complete those modules in a fun setting where you compete with peers for a spot on the leaderboard!

    Then come back tomorrow as we wrap up the week with a discussion on end-to-end scenarios, a recap of what we covered this week, and a look at what's ahead next week.

    Resources

    Start here for developer guidance in getting started with Azure Functions as a .NET/C# developer:

    Then learn about supported Triggers and Bindings for C#, with code snippets to show how they are used.

    Finally, explore Azure Functions samples for C# and learn to implement serverless solutions. Examples include:

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/25/index.html b/blog/tags/serverless-september/page/25/index.html index a34715babf..fcdcc3ed01 100644 --- a/blog/tags/serverless-september/page/25/index.html +++ b/blog/tags/serverless-september/page/25/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@


    · 8 min read
    David Justo

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Durable Entities
    • Some Background
    • A Programming Model
    • Entities for a Micro-Blogging Platform


    Durable Entities are a special type of Azure Function that allow you to implement stateful objects in a serverless environment. They make it easy to introduce stateful components to your app without needing to manually persist data to external storage, so you can focus on your business logic. We’ll demonstrate their power with a real-life example in the last section.

    Entities 101: Some Background

Programming Durable Entities feels a lot like object-oriented programming, except that these “objects” exist in a distributed system. Like objects, each Entity instance has a unique identifier, i.e. an entity ID that can be used to read and manipulate its internal state. Entities define a list of operations that constrain how their internal state is managed, like an object interface.

    Some experienced readers may realize that Entities sound a lot like an implementation of the Actor Pattern. For a discussion of the relationship between Entities and Actors, please refer to this documentation.

    Entities are a part of the Durable Functions Extension, an extension of Azure Functions that empowers programmers with stateful abstractions for serverless, such as Orchestrations (i.e. workflows).

    Durable Functions is available in most Azure Functions runtime environments: .NET, Node.js, Python, PowerShell, and Java (preview). For this article, we’ll focus on the C# experience, but note that Entities are also available in Node.js and Python; their availability in other languages is underway.

    Entities 102: The programming model

    Imagine you want to implement a simple Entity that just counts things. Its interface allows you to get the current count, add to the current count, and to reset the count to zero.

    If you implement this in an object-oriented way, you’d probably define a class (say “Counter”) with a method to get the current count (say “Counter.Get”), another to add to the count (say “Counter.Add”), and another to reset the count (say “Counter.Reset”). Well, the implementation of an Entity in C# is not that different from this sketch:

[JsonObject(MemberSerialization.OptIn)]
public class Counter
{
    [JsonProperty("value")]
    public int Value { get; set; }

    public void Add(int amount)
    {
        this.Value += amount;
    }

    public Task Reset()
    {
        this.Value = 0;
        return Task.CompletedTask;
    }

    public Task<int> Get()
    {
        return Task.FromResult(this.Value);
    }

    [FunctionName(nameof(Counter))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<Counter>();
}

    We’ve defined a class named Counter, with an internal count stored in the variable “Value” which is manipulated through the “Add” and “Reset” methods, and which can be read via “Get”.

The “Run” method is simply boilerplate required for the Azure Functions framework to interact with the object we’ve defined – it’s the method that the framework calls internally when it needs to load the Entity object. When DispatchAsync is called, the Entity and its corresponding state (the last count in “Value”) are loaded from storage. Again, this is mostly just boilerplate: your Entity’s business logic lies in the rest of the class.

    Finally, the Json annotation on top of the class and the Value field tells the Durable Functions framework that the “Value” field is to be durably persisted as part of the durable state on each Entity invocation. If you were to annotate other class variables with JsonProperty, they would also become part of the managed state.

    Entities for a micro-blogging platform

We’ll try to implement a simple micro-blogging platform, a la Twitter. Let’s call it “Chirper”. In Chirper, users write chirps (i.e. tweets), follow and unfollow other users, and read the chirps of users they follow.

    Defining Entity

Just like in OOP, it’s useful to begin by identifying the stateful agents of this scenario. In this case, users have state (who they follow and their chirps), and chirps have state in the form of their content. So, we could model these stateful agents as Entities!

    Below is a potential way to implement a User for Chirper as an Entity:

[JsonObject(MemberSerialization = MemberSerialization.OptIn)]
public class User : IUser
{
    [JsonProperty]
    public List<string> FollowedUsers { get; set; } = new List<string>();

    public void Add(string user)
    {
        FollowedUsers.Add(user);
    }

    public void Remove(string user)
    {
        FollowedUsers.Remove(user);
    }

    public Task<List<string>> Get()
    {
        return Task.FromResult(FollowedUsers);
    }

    // note: removed boilerplate “Run” method, for conciseness.
}

In this case, our Entity’s internal state is stored in “FollowedUsers”, which is a list of accounts followed by this user. The operations exposed by this entity allow clients to read and modify this data: it can be read via “Get”, a newly followed user can be added via “Add”, and a user can be unfollowed via “Remove”.

With that, we’ve modeled a Chirper user as an Entity! Recall that Entity instances each have a unique ID, so we can consider that unique ID to correspond to a specific user account.

What about chirps? Should we represent them as Entities as well? That would certainly be valid. However, we would then need to maintain a mapping between each user's entity ID and the entity IDs of every chirp that user wrote.

For demonstration purposes, a simpler approach would be to create an Entity that stores the list of all chirps authored by a given user; call it UserChirps. Then, we could have each User Entity share the same entity ID as its corresponding UserChirps Entity, making client operations easier.

    Below is a simple implementation of UserChirps:

[JsonObject(MemberSerialization = MemberSerialization.OptIn)]
public class UserChirps : IUserChirps
{
    [JsonProperty]
    public List<Chirp> Chirps { get; set; } = new List<Chirp>();

    public void Add(Chirp chirp)
    {
        Chirps.Add(chirp);
    }

    public void Remove(DateTime timestamp)
    {
        Chirps.RemoveAll(chirp => chirp.Timestamp == timestamp);
    }

    public Task<List<Chirp>> Get()
    {
        return Task.FromResult(Chirps);
    }

    // Omitted boilerplate “Run” function
}

Here, our state is stored in Chirps, a list of user posts. Our operations follow the same pattern as before (Get, Add, and Remove), but we’re representing different data.

    To put it all together, let’s set up Entity clients to generate and manipulate these Entities according to some REST API.

    Interacting with Entity

    Before going there, let’s talk briefly about how you can interact with an Entity. Entity interactions take one of two forms -- calls and signals:

• Calling an entity is a two-way communication. You send an operation message to the entity and then wait for the response message before you continue. The response can be a result value or an error.
• Signaling an entity is a one-way (fire-and-forget) communication. You send an operation message but don’t wait for a response. You have the reassurance that the message will be delivered eventually, but you don’t know when and don’t know what the response is.

For example, when you read the state of an Entity, you are performing a “call” interaction. When you record that a user has followed another, you may choose to simply signal it.

Now say a user with a given userId (say “durableFan99”) wants to post a chirp. For this, you can write an HTTP endpoint that signals the UserChirps entity to record that chirp. We can leverage the HTTP Trigger functionality from Azure Functions and pair it with an entity client binding that signals the Add operation of our Chirp Entity:

    [FunctionName("UserChirpsPost")] 
    public static async Task<HttpResponseMessage> UserChirpsPost(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "user/{userId}/chirps")]
    HttpRequestMessage req,
    DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {
    Authenticate(req, userId);
    var chirp = new Chirp()
    {
    UserId = userId,
    Timestamp = DateTime.UtcNow,
    Content = await req.Content.ReadAsStringAsync(),
    };
    await client.SignalEntityAsync<IUserChirps>(userId, x => x.Add(chirp));
    return req.CreateResponse(HttpStatusCode.Accepted, chirp);
    }

    Following the same pattern as above, to get all the chirps from a user, you could read the status of your Entity via ReadEntityStateAsync, which follows the call-interaction pattern as your client expects a response:

    [FunctionName("UserChirpsGet")] 
    public static async Task<HttpResponseMessage> UserChirpsGet(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "user/{userId}/chirps")] HttpRequestMessage req,
    [DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {

    Authenticate(req, userId);
    var target = new EntityId(nameof(UserChirps), userId);
    var chirps = await client.ReadEntityStateAsync<UserChirps>(target);
    return chirps.EntityExists
    ? req.CreateResponse(HttpStatusCode.OK, chirps.EntityState.Chirps)
    : req.CreateResponse(HttpStatusCode.NotFound);
    }

    And there you have it! To play with a complete implementation of Chirper, you can try out our sample in the Durable Functions extension repo.

    Thank you!

    info

    Thanks for following along, and we hope you find Entities as useful as we do! If you have questions or feedback, please file issues in the repo above or tag us @AzureFunctions on Twitter

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/26/index.html b/blog/tags/serverless-september/page/26/index.html index 5f3856ab31..6be1989d27 100644 --- a/blog/tags/serverless-september/page/26/index.html +++ b/blog/tags/serverless-september/page/26/index.html @@ -14,13 +14,13 @@ - +


    · 8 min read
    Kendall Roden

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Defining Cloud-Native
    • Introduction to Azure Container Apps
    • Dapr In Azure Container Apps
    • Conclusion


    Defining Cloud-Native

    While I’m positive I’m not the first person to ask this, I think it’s an appropriate way for us to kick off this article: “How many developers does it take to define Cloud-Native?” I hope you aren’t waiting for a punch line because I seriously want to know your thoughts (drop your perspectives in the comments..) but if you ask me, the limit does not exist!

    A quick online search of the topic returns a laundry list of articles, e-books, twitter threads, etc. all trying to nail down the one true definition. While diving into the rabbit hole of Cloud-Native, you will inevitably find yourself on the Cloud-Native Computing Foundation (CNCF) site. The CNCF is part of the Linux Foundation and aims to make "Cloud-Native computing ubiquitous" through deep open source project and community involvement. The CNCF has also published arguably the most popularized definition of Cloud-Native which begins with the following statement:

    “Cloud-Native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds."

    Over the past four years, my day-to-day work has been driven primarily by the surging demand for application containerization and the drastic adoption of Kubernetes as the de-facto container orchestrator. Customers are eager to learn and leverage patterns, practices and technologies that enable building "loosely coupled systems that are resilient, manageable, and observable". Enterprise developers at these organizations are being tasked with rapidly deploying event-driven, horizontally-scalable, polyglot services via repeatable, code-to-cloud pipelines.

    While building Cloud-Native solutions can enable rapid innovation, the transition to adopting a Cloud-Native architectural approach comes with a steep learning curve and a new set of considerations. In a document published by Microsoft called What is Cloud-Native?, there are a few key areas highlighted to aid customers in the adoption of best practices for building modern, portable applications which I will summarize below:

    Cloud infrastructure

    • Cloud-Native applications leverage cloud infrastructure and make use of Platform-as-a-service offerings
    • Cloud-Native applications depend on highly-elastic infrastructure with automatic scaling, self-healing, and monitoring capabilities

    Modern application design

    • Cloud-Native applications should be constructed using principles outlined in the 12 factor methodology

    Microservices

    • Cloud-Native applications are typically composed of microservices where each core function, or service, is built and deployed independently

    Containers

    • Cloud-Native applications are typically deployed using containers as a packaging mechanism where an application's code and dependencies are bundled together for consistency of deployment
    • Cloud-Native applications leverage container orchestration technologies- primarily Kubernetes- for achieving capabilities such as workload scheduling, self-healing, auto-scale, etc.

    Backing services

    • Cloud-Native applications are ideally stateless workloads which retrieve and store data in data stores external to the application hosting infrastructure. Cloud providers like Azure provide an array of backing data services which can be securely accessed from application code and provide capabilities for ensuring application data is highly-available

    Automation

    • Cloud-Native solutions should use deployment automation for backing cloud infrastructure via versioned, parameterized Infrastructure as Code (IaC) templates which provide a consistent, repeatable process for provisioning cloud resources.
    • Cloud-Native solutions should make use of modern CI/CD practices and pipelines to ensure successful, reliable infrastructure and application deployment.

    Azure Container Apps

    In many of the conversations I've had with customers that involve talk of Kubernetes and containers, the topics of cost-optimization, security, networking, and reducing infrastructure and operations inevitably arise. I personally have yet to meet with any customers eager to have their developers get more involved with infrastructure concerns.

    One of my former colleagues, Jeff Hollan, made a statement while appearing on a 2019 episode of The Cloud-Native Show where he shared his perspective on Cloud-Native:

    "When I think about Cloud-Native... it's writing applications in a way where you are specifically thinking about the benefits the cloud can provide... to me, serverless is the perfect realization of that because the only reason you can write serverless applications is because the cloud exists."

    I must say that I agree with Jeff's perspective. In addition to optimizing development practices for the Cloud-Native world, reducing infrastructure exposure and operations is equally as important to many organizations and can be achieved as a result of cloud platform innovation.

    In May of 2022, Microsoft announced the general availability of Azure Container Apps. Azure Container Apps provides customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform.

For those interested in taking advantage of the open source ecosystem while reaping the benefits of a managed platform experience, Container Apps runs on Kubernetes and provides a set of managed open source projects embedded directly into the platform, including the Kubernetes Event Driven Autoscaler (KEDA), the Distributed Application Runtime (Dapr), and Envoy.

    Azure Kubernetes Service vs. Azure Container Apps

Container Apps provides other Cloud-Native features and capabilities in addition to those above, including but not limited to:

The ability to dynamically scale and support growing numbers of users, events, and requests is one of the core requirements for most Cloud-Native, distributed applications. Azure Container Apps is purpose-built with this and other Cloud-Native tenets in mind.

    What can you build with Azure Container Apps?

    Dapr in Azure Container Apps

As a quick personal note before we dive into this section: I am a bit biased about Dapr. When Dapr was first released, I had an opportunity to get involved immediately and became an early advocate for the project. It is created by developers, for developers, and solves tangible problems that customers architecting distributed systems face:

    HOW DO I
    • integrate with external systems that my app has to react and respond to?
    • create event driven apps which reliably send events from one service to another?
    • observe the calls and events between my services to diagnose issues in production?
    • access secrets securely from within my application?
    • discover other services and call methods on them?
    • prevent committing to a technology early and have the flexibility to swap out an alternative based on project or environment changes?

While there were existing solutions in the market that could address some of the concerns above, there was no lightweight, CNCF-backed project providing a unified approach to the more fundamental ask from customers: "How do I make it easy for developers to build microservices based on Cloud-Native best practices?"

    Enter Dapr!

“The Distributed Application Runtime (Dapr) provides APIs that simplify microservice connectivity. Whether your communication pattern is service-to-service invocation or pub/sub messaging, Dapr helps you write resilient and secured microservices. By letting Dapr’s sidecar take care of the complex challenges such as service discovery, message broker integration, encryption, observability, and secret management, you can focus on business logic and keep your code simple.”
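
To make the sidecar model concrete, here is a minimal sketch of invoking another service through the local Dapr sidecar over plain HTTP; the app ID ("orders"), method name, and default sidecar port 3500 are placeholders, not from this article.

import requests

DAPR_URL = "http://localhost:3500/v1.0"  # the app only ever talks to its local sidecar

def create_order(order: dict) -> dict:
    # Dapr handles service discovery, retries, mTLS, and tracing for this call.
    resp = requests.post(f"{DAPR_URL}/invoke/orders/method/create", json=order)
    resp.raise_for_status()
    return resp.json()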

The Container Apps platform provides a managed and supported Dapr integration, which eliminates the need to deploy and manage the Dapr OSS project yourself. In addition to providing managed upgrades, the platform also exposes a simplified Dapr interaction model to increase developer productivity and reduce the friction required to leverage Dapr capabilities. While the Dapr integration makes it easier for customers to adopt Cloud-Native best practices in Container Apps, it is not required to make use of the Container Apps platform.

    Image on Dapr

For additional insight into the Dapr integration, visit aka.ms/aca-dapr.

    Conclusion

    Backed by and integrated with powerful Cloud-Native technologies, Azure Container Apps strives to make developers productive, while reducing the operational overhead and learning curve that typically accompanies adopting a cloud-native strategy.

    If you are interested in building resilient, portable and highly-scalable apps visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/27/index.html b/blog/tags/serverless-september/page/27/index.html index dc36885ad5..4b2d1b3eb9 100644 --- a/blog/tags/serverless-september/page/27/index.html +++ b/blog/tags/serverless-september/page/27/index.html @@ -14,13 +14,13 @@ - +


    · 7 min read
    Aaron Powell

    Welcome to Day 5 of #30DaysOfServerless!

    Yesterday we looked at Azure Functions from the perspective of a Java developer. Today, we'll do a similar walkthrough from the perspective of a JavaScript developer.

    And, we'll use this to explore another popular usage scenario for Azure Functions: building a serverless HTTP API using JavaScript.

    Ready? Let's go.


    What We'll Cover

    • Developer Guidance
    • Create Azure Function with CLI
    • Calling an external API
    • Azure Samples & Scenarios for JS
    • Exercise: Support searching
    • Resources: For self-study!


    Developer Guidance

    If you're a JavaScript developer new to serverless on Azure, start by exploring the Azure Functions JavaScript Developers Guide. It covers:

    • Quickstarts for Node.js - using Visual Code, CLI or Azure Portal
    • Guidance on hosting options and performance considerations
    • Azure Functions bindings and (code samples) for JavaScript
    • Scenario examples - integrations with other Azure Services

    Node.js 18 Support

    Node.js 18 Support (Public Preview)

    Azure Functions support for Node.js 18 entered Public Preview on Aug 31, 2022 and is supported by the Azure Functions v.4.x runtime!

    As we continue to explore how we can use Azure Functions, today we're going to look at using JavaScript to create one, and we're going to be using the newly released Node.js 18 support for Azure Functions to make the most out of the platform.

    Ensure you have Node.js 18 and Azure Functions v4.x versions installed, along with a text editor (I'll use VS Code in this post), and a terminal, then we're ready to go.

    Scenario: Calling The GitHub API

    The application we're going to be building today will use the GitHub API to return a random commit message, so that we don't need to come up with one ourselves! After all, naming things can be really hard! 🤣

    Creating the Azure Function

    To create our Azure Function, we're going to use the Azure Functions CLI, which we can install using npm:

npm install --global azure-functions-core-tools

Once that's installed, we can use the new func command to initialise our project:

    func init --worker-runtime node --language javascript

When running func init we can either provide the worker-runtime and language as arguments, or use the menu system that the tool will provide us. For brevity's sake, I've used the arguments here, specifying that we want node as the runtime and javascript as the language, but you could change that to typescript if you'd prefer to use TypeScript.

    Once the init command is completed, you should have a .vscode folder, and the files .gitignore, host.json, local.settings.json, and package.json.

    Files generated by func initFiles generated by func init

    Adding a HTTP Trigger

    We have an empty Functions app so far, what we need to do next is create a Function that it will run, and we're going to make a HTTP Trigger Function, which is a Function that responds to HTTP requests. We'll use the func new command to create that:

    func new --template "HTTP Trigger" --name "get-commit-message"

When this completes, we'll have a folder for the Function, using the name we provided, that contains the files function.json and index.js. Let's open function.json to understand it a little bit:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

This file is used to tell Functions about the Function that we've created and what it does, so it knows to handle the appropriate events. We have a bindings node which contains the event bindings for our Azure Function. The first binding is using the type httpTrigger, which indicates that it'll be executed, or triggered, by an HTTP event, and methods indicates that it's listening to both GET and POST (you can change this to the HTTP methods that you want to support). The HTTP request information will be bound to a property in the Functions context called req, so we can access query strings, the request body, etc.

The other binding has the direction of out, meaning that it's something the Function will return to the caller. Since this is an HTTP API, the type is http, indicating that we'll return an HTTP response, and that response will be on a property called res that we add to the Functions context.

    Let's go ahead and start the Function and call it:

    func start

    Starting the FunctionStarting the Function

    With the Function started, access the endpoint http://localhost:7071/api/get-commit-message via a browser or using cURL:

    curl http://localhost:7071/api/get-commit-message\?name\=ServerlessSeptember

    Hello from Azure FunctionsHello from Azure Functions

    🎉 CONGRATULATIONS

    You created and ran a JavaScript function app locally!

    Calling an external API

    It's time to update the Function to do what we want to do - call the GitHub Search API and get some commit messages. The endpoint that we'll be calling is https://api.github.com/search/commits?q=language:javascript.

    Note: The GitHub API is rate limited and this sample will call it unauthenticated, so be aware of that in your own testing.

    To call this API, we'll leverage the newly released fetch support in Node 18 and async/await, to make for a very clean Function.

Open up the index.js file, and delete the contents of the existing Function, so we have an empty one:

    module.exports = async function (context, req) {

    }

    The default template uses CommonJS, but you can use ES Modules with Azure Functions if you prefer.

    Now we'll use fetch to call the API, and unpack the JSON response:

module.exports = async function (context, req) {
  const res = await fetch("https://api.github.com/search/commits?q=language:javascript");
  const json = await res.json();
  const messages = json.items.map(item => item.commit.message);
  context.res = {
    body: {
      messages
    }
  };
}

    To send a response to the client, we're setting the context.res property, where res is the name of the output binding in our function.json, and giving it a body that contains the commit messages.

    Run func start again, and call the endpoint:

    curl http://localhost:7071/api/get-commit-message

Then you'll get some commit messages:

    A series of commit messages from the GitHub Search APIA series of commit messages from the GitHub Search API

    🎉 CONGRATULATIONS

    There we go, we've created an Azure Function which is used as a proxy to another API, that we call (using native fetch in Node.js 18) and from which we return a subset of the JSON payload.

    Next Steps

    Other Triggers, Bindings

    This article focused on using the HTTPTrigger and relevant bindings, to build a serverless API using Azure Functions. How can you explore other supported bindings, with code samples to illustrate usage?

    Scenarios with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other services. Here are some suggestions:

    Exercise: Support searching

    The GitHub Search API allows you to provide search parameters via the q query string. In this sample, we hard-coded it to be language:javascript, but as a follow-on exercise, expand the Function to allow the caller to provide the search terms as a query string to the Azure Function, which is passed to the GitHub Search API. Hint - have a look at the req argument.
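
If you want a nudge, one possible shape for that change looks like this (the q parameter name is just a suggestion, not part of the original sample):

module.exports = async function (context, req) {
  // Use the caller's search terms, falling back to the original hard-coded query.
  const query = req.query.q || "language:javascript";
  const res = await fetch(
    `https://api.github.com/search/commits?q=${encodeURIComponent(query)}`
  );
  const json = await res.json();
  const messages = json.items.map((item) => item.commit.message);
  context.res = { body: { messages } };
};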

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/28/index.html b/blog/tags/serverless-september/page/28/index.html index 3566099350..e9dcd39383 100644 --- a/blog/tags/serverless-september/page/28/index.html +++ b/blog/tags/serverless-september/page/28/index.html @@ -14,13 +14,13 @@ - +


    · 8 min read
    Rory Preddy

    Welcome to Day 4 of #30DaysOfServerless!

    Yesterday we walked through an Azure Functions Quickstart with JavaScript, and used it to understand the general Functions App structure, tooling and developer experience.

    Today we'll look at developing Functions app with a different programming language - namely, Java - and explore developer guidance, tools and resources to build serverless Java solutions on Azure.


    What We'll Cover


    Developer Guidance

    If you're a Java developer new to serverless on Azure, start by exploring the Azure Functions Java Developer Guide. It covers:

    In this blog post, we'll dive into one quickstart, and discuss other resources briefly, for awareness! Do check out the recommended exercises and resources for self-study!


    My First Java Functions App

    In today's post, we'll walk through the Quickstart: Azure Functions tutorial using Visual Studio Code. In the process, we'll setup our development environment with the relevant command-line tools and VS Code extensions to make building Functions app simpler.

Note: Completing this exercise may incur a cost of a few USD cents based on your Azure subscription. Explore pricing details to learn more.

    First, make sure you have your development environment setup and configured.

    PRE-REQUISITES
    1. An Azure account with an active subscription - Create an account for free
    2. The Java Development Kit, version 11 or 8. - Install
    3. Apache Maven, version 3.0 or above. - Install
    4. Visual Studio Code. - Install
    5. The Java extension pack - Install
    6. The Azure Functions extension for Visual Studio Code - Install

    VS Code Setup

    NEW TO VISUAL STUDIO CODE?

    Start with the Java in Visual Studio Code tutorial to jumpstart your learning!

    Install the Extension Pack for Java (shown below) to install 6 popular extensions to help development workflow from creation to testing, debugging, and deployment.

    Extension Pack for Java

    Now, it's time to get started on our first Java-based Functions app.

    1. Create App

    1. Open a command-line terminal and create a folder for your project. Use the code command to launch Visual Studio Code from that directory as shown:

      $ mkdir java-function-resource-group-api
      $ cd java-function-resource-group-api
      $ code .
    2. Open the Visual Studio Command Palette (Ctrl + Shift + p) and select Azure Functions: create new project to kickstart the create workflow. Alternatively, you can click the Azure icon (on activity sidebar), to get the Workspace window, click "+" and pick the "Create Function" option as shown below.

      Screenshot of creating function in Azure from Visual Studio Code.

    3. This triggers a multi-step workflow. Fill in the information for each step as shown in the following prompts. Important: Start this process from an empty folder - the workflow will populate it with the scaffold for your Java-based Functions app.

  Prompt → Value
  • Choose the directory location. → You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
  • Select a language → Choose Java.
  • Select a version of Java → Choose Java 11 or Java 8, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally.
  • Provide a group ID → Choose com.function.
  • Provide an artifact ID → Enter myFunction.
  • Provide a version → Choose 1.0-SNAPSHOT.
  • Provide a package name → Choose com.function.
  • Provide an app name → Enter HttpExample.
  • Select the build tool for Java project → Choose Maven.

    Visual Studio Code uses the provided information and generates an Azure Functions project. You can view the local project files in the Explorer - it should look like this:

    Azure Functions Scaffold For Java

    2. Preview App

    Visual Studio Code integrates with the Azure Functions Core tools to let you run this project on your local development computer before you publish to Azure.

    1. To build and run the application, use the following Maven command. You should see output similar to that shown below.

      $ mvn clean package azure-functions:run
      ..
      ..
      Now listening on: http://0.0.0.0:7071
      Application started. Press Ctrl+C to shut down.

      Http Functions:

      HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
      ...
    2. Copy the URL of your HttpExample function from this output to a browser and append the query string ?name=<YOUR_NAME>, making the full URL something like http://localhost:7071/api/HttpExample?name=Functions. The browser should display a message that echoes back your query string value. The terminal in which you started your project also shows log output as you make requests.

    🎉 CONGRATULATIONS

    You created and ran a function app locally!

    With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger. After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code and Maven to publish and test the project on Azure.

    3. Sign into Azure

    Before you can deploy, sign in to your Azure subscription.

    az login

    The az login command signs you into your Azure account.

    Use the following command to deploy your project to a new function app.

    mvn clean package azure-functions:deploy

    When the creation is complete, the following Azure resources are created in your subscription:

    • Resource group. Named as java-functions-group.
    • Storage account. Required by Functions. The name is generated randomly based on Storage account name requirements.
Hosting plan. Serverless hosting for your function app. The name is java-functions-app-service-plan.
    • Function app. A function app is the deployment and execution unit for your functions. The name is randomly generated based on your artifactId, appended with a randomly generated number.

    4. Deploy App

    1. Back in the Resources area in the side bar, expand your subscription, your new function app, and Functions. Right-click (Windows) or Ctrl - click (macOS) the HttpExample function and choose Execute Function Now....

      Screenshot of executing function in Azure from Visual Studio Code.

    2. In Enter request body you see the request message body value of { "name": "Azure" }. Press Enter to send this request message to your function.

    3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.

    You can also copy the complete Invoke URL shown in the output of the publish command into a browser address bar, appending the query parameter ?name=Functions. The browser should display similar output as when you ran the function locally.

    🎉 CONGRATULATIONS

    You deployed your function app to Azure, and invoked it!

    5. Clean up

    Use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name java-functions-group

    Next Steps

    So, where can you go from here? The example above used a familiar HTTP Trigger scenario with a single Azure service (Azure Functions). Now, think about how you can build richer workflows by using other triggers and integrating with other Azure or third-party services.

    Other Triggers, Bindings

    Check out Azure Functions Samples In Java for samples (and short use cases) that highlight other triggers - with code! This includes triggers to integrate with CosmosDB, Blob Storage, Event Grid, Event Hub, Kafka and more.

    Scenario with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other Services. Here are a couple of useful tutorials:

    Exercise

    Time to put this into action and validate your development workflow:

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/29/index.html b/blog/tags/serverless-september/page/29/index.html index 1687de253f..9a472959e1 100644 --- a/blog/tags/serverless-september/page/29/index.html +++ b/blog/tags/serverless-september/page/29/index.html @@ -14,13 +14,13 @@ - +


    · 9 min read
    Nitya Narasimhan

    Welcome to Day 3 of #30DaysOfServerless!

    Yesterday we learned core concepts and terminology for Azure Functions, the signature Functions-as-a-Service option on Azure. Today we take our first steps into building and deploying an Azure Functions app, and validate local development setup.

    Ready? Let's go.


    What We'll Cover


    Developer Guidance

    Before we jump into development, let's familiarize ourselves with language-specific guidance from the Azure Functions Developer Guide. We'll review the JavaScript version but guides for F#, Java, Python, C# and PowerShell are also available.

1. A function is defined by two things: code (written in a supported programming language) and configuration (specified in a function.json file, declaring the triggers, bindings and other context for execution).

    2. A function app is the unit of deployment for your functions, and is associated with a single execution context or runtime. It can contain multiple functions, but they must be in the same language.

    3. A host configuration is runtime-specific configuration that affects all functions running in a given function app instance. It is defined in a host.json file.

    4. A recommended folder structure is defined for the function app, but may vary based on the programming language used. Check the documentation on folder structures to learn the default for your preferred language.

    Here's an example of the JavaScript folder structure for a function app containing two functions with some shared dependencies. Note that host.json (runtime configuration) is defined once, in the root directory. And function.json is defined separately for each function.

    FunctionsProject
    | - MyFirstFunction
    | | - index.js
    | | - function.json
    | - MySecondFunction
    | | - index.js
    | | - function.json
    | - SharedCode
    | | - myFirstHelperFunction.js
    | | - mySecondHelperFunction.js
    | - node_modules
    | - host.json
    | - package.json
    | - local.settings.json

    We'll dive into what the contents of these files look like when we build and deploy the first function. We'll cover local.settings.json in the About Local Testing section at the end.
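    For reference, a freshly scaffolded host.json is quite small. Here's an illustrative sketch of what it typically contains - your scaffold's exact contents and extension bundle version range may differ:

    {
      "version": "2.0",
      "logging": {
        "applicationInsights": {
          "samplingSettings": {
            "isEnabled": true,
            "excludedTypes": "Request"
          }
        }
      },
      "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[3.*, 4.0.0)"
      }
    }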


    My First Function App

    The documentation provides quickstart options for all supported languages. We'll walk through the JavaScript versions in this article. You have two options for development:

    I'm a huge fan of VS Code - so I'll be working through that tutorial today.

    PRE-REQUISITES

    Don't forget to validate your setup by checking the versions of installed software.
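    For example, you can quickly confirm the core tooling from a terminal - a quick sketch; the exact versions you need depend on the quickstart you follow:

    node --version
    func --version
    az --version
    code --version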

    Install VSCode Extension

    Installing the Visual Studio Code extension should automatically open this page in your IDE with similar quickstart instructions, but potentially more recent screenshots.

    Azure Functions extension for Visual Studio Code

    Note that it may make sense to install the Azure tools for Visual Studio Code extensions pack if you plan on working through the many projects in Serverless September. This includes the Azure Functions extension by default.

    Create First Function App

    Walk through the Create local [project] steps of the quickstart. The process is quick and painless and scaffolds out this folder structure and files. Note the existence (and locations) of the function.json and host.json files.

    Final screenshot for VS Code workflow

    Explore the Code

    Check out the function.json configuration file. It shows that the function is activated by an httpTrigger with an input binding (tied to the req payload) and an output binding (tied to the res payload). And it supports both GET and POST requests on the exposed URL.

    {
      "bindings": [
        {
          "authLevel": "anonymous",
          "type": "httpTrigger",
          "direction": "in",
          "name": "req",
          "methods": [
            "get",
            "post"
          ]
        },
        {
          "type": "http",
          "direction": "out",
          "name": "res"
        }
      ]
    }

    Check out index.js - the function implementation. We see it logs a message to the console when invoked. It then extracts a name value from the input payload (req) and crafts a different responseMessage based on the presence/absence of a valid name. It returns this response in the output payload (res).

    module.exports = async function (context, req) {
        context.log('JavaScript HTTP trigger function processed a request.');

        const name = (req.query.name || (req.body && req.body.name));
        const responseMessage = name
            ? "Hello, " + name + ". This HTTP triggered function executed successfully."
            : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";

        context.res = {
            // status: 200, /* Defaults to 200 */
            body: responseMessage
        };
    }

    Preview Function App Locally

    You can now run this function app locally using Azure Functions Core Tools. VS Code integrates seamlessly with this CLI-based tool, letting you use all its capabilities without leaving the IDE. In fact, the workflow will even prompt you to install the tools if they aren't already present in your local dev environment.

    Now run the function app locally by clicking on the "Run and Debug" icon in the activity bar (highlighted, left) and pressing the "▶️" (Attach to Node Functions) button to start execution. On success, your console output should show something like this.

    Final screenshot for VS Code workflow

    You can test the function locally by visiting the Function Url shown (http://localhost:7071/api/HttpTrigger1) or by opening the Workspace region of the Azure extension, and selecting the Execute Function now menu item as shown.

    Final screenshot for VS Code workflow

    In the latter case, the Enter request body popup will show a pre-populated request of {"name":"Azure"} that you can submit.

    Final screenshot for VS Code workflow

    On successful execution, your VS Code window will show a notification as follows. Take note of the console output - it shows the message encoded in index.js.

    Final screenshot for VS Code workflow

    You can also visit the local function URL directly in your browser - testing the case for a request made with no name payload attached. Note how the response in the browser now shows the non-personalized version of the message!

    Final screenshot for VS Code workflow
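    If you prefer the terminal, you can exercise both cases with curl against the same local URL - a quick sketch, assuming the default function name and port shown above:

    # Personalized response (name provided)
    curl "http://localhost:7071/api/HttpTrigger1?name=Functions"

    # Generic response (no name payload)
    curl "http://localhost:7071/api/HttpTrigger1"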

    🎉 Congratulations

    You created and ran a function app locally!

    (Re)Deploy to Azure

    Now, just follow the creating a function app in Azure steps to deploy it to Azure, using an active subscription! The deployed app resource should now show up under the Function App Resources where you can click Execute Function Now to test the Azure-deployed version instead. You can also look up the function URL in the portal and visit that link in your local browser to trigger the function without the name context.

    🎉 Congratulations

    You have an Azure-hosted serverless function app!

    Challenge yourself and try to change the code and redeploy to Azure to return something different. You have effectively created a serverless API endpoint!


    About Core Tools

    That was a lot to cover! In the next few days we'll have more examples for Azure Functions app development - focused on different programming languages. So let's wrap today's post by reviewing two helpful resources.

    First, let's talk about Azure Functions Core Tools - the command-line tool that lets you develop, manage, and deploy Azure Functions projects from your local development environment. It is used transparently by the VS Code extension - but you can use it directly from a terminal for a powerful command-line end-to-end developer experience! The Core Tools commands are organized into the following contexts:

    Learn how to work with Azure Functions Core Tools. Not only can it help with quick command execution, it can also be invaluable for debugging issues that may not always be visible or understandable in an IDE.
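    As a rough illustration, a typical end-to-end Core Tools workflow from the terminal might look like this - a sketch, with the function app name as a placeholder for your own deployed app:

    # Scaffold a new JavaScript function app
    func init MyFunctionsProject --javascript
    cd MyFunctionsProject

    # Add an HTTP-triggered function
    func new --template "HTTP trigger" --name HttpTrigger1

    # Run and test locally
    func start

    # Publish to an existing function app in Azure
    func azure functionapp publish <FUNCTION_APP_NAME>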

    About Local Testing

    You might have noticed that the scaffold also produced a local.settings.json file. What is that and why is it useful? By definition, the local.settings.json file "stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally."

    Read the guidance on Code and test Azure Functions Locally to learn more about how to configure development environments locally, for your preferred programming language, to support testing and debugging on the local Functions runtime.
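    For context, here's an illustrative local.settings.json similar to what the JavaScript scaffold produces - the values are placeholders, and this file should stay out of source control:

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "node"
      }
    }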

    Exercise

    We made it! Now it's your turn!! Here are a few things you can try to apply what you learned and reinforce your understanding:

    Resources

    Bookmark and visit the #30DaysOfServerless Collection. It's the one-stop collection of resources we will keep updated with links to relevant documentation and learning resources.

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/3/index.html b/blog/tags/serverless-september/page/3/index.html index 9d3b231973..1f7e77ed22 100644 --- a/blog/tags/serverless-september/page/3/index.html +++ b/blog/tags/serverless-september/page/3/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 14 min read
    Justin Yoo

    Welcome to Day 28 of #30DaysOfServerless!

    Since it's the serverless end-to-end week, I'm going to discuss how a serverless Azure Functions application with the OpenAPI extension can be seamlessly integrated with a Power Platform custom connector through Azure API Management - in a post I call "Where am I? My GPS Location with Serverless Power Platform Custom Connector".

    OK. Are you ready? Let's get started!


    What We'll Cover

    • What is Power Platform custom connector?
    • Proxy app to Google Maps and Naver Map API
    • API Management integration
    • Two ways of building custom connector
    • Where am I? Power Apps app
    • Exercise: Try this yourself!
    • Resources: For self-study!


    SAMPLE REPO

    Want to follow along? Check out the sample app on GitHub repository used in this post.

    What is Power Platform custom connector?

    Power Platform is a low-code/no-code application development platform for fusion teams - groups of people from various disciplines, including field experts (domain experts), IT professionals and professional developers, working together to deliver business value. Within the fusion team, Power Platform turns the domain experts into citizen developers or low-code developers. What makes Power Platform even more powerful is that it offers hundreds of connectors to other Microsoft 365 and third-party services like SAP, ServiceNow, Salesforce, Google, etc.

    However, what if you want to use your internal APIs, or APIs that don't yet offer an official connector? Here's an example: suppose your company has an inventory management system, and you want to use it within Power Apps or Power Automate. That is exactly where Power Platform custom connectors come in.

    Inventory Management System for Power Apps

    Therefore, Power Platform custom connectors enrich those citizen developers' capabilities, because they can connect any API application and make it available for citizen developers to use.

    In this post, let's build a custom connector that provides a static map image generated by Google Maps API and Naver Map API using your GPS location.

    Proxy app to Google Maps and Naver Map API

    First, let's build an Azure Functions app that connects to Google Maps and Naver Map. Suppose that you've already got the API keys for both services. If you haven't yet, get the keys first by visiting here for Google and here for Naver. Then, store them to local.settings.json within your Azure Functions app.

    {
    "Values": {
    ...
    "Maps__Google__ApiKey": "<GOOGLE_MAPS_API_KEY>",
    "Maps__Naver__ClientId": "<NAVER_MAP_API_CLIENT_ID>",
    "Maps__Naver__ClientSecret": "<NAVER_MAP_API_CLIENT_SECRET>"
    }
    }

    Here's the sample logic to get the static image from Google Maps API. It takes the latitude and longitude of your current location and image zoom level, then returns the static map image. There are a few hard-coded assumptions, though:

    • The image size should be 400x400.
    • The image should be in .png format.
    • The marker should be red and show my location.
    public class GoogleMapService : IMapService
    {
        public async Task<byte[]> GetMapAsync(HttpRequest req)
        {
            var latitude = req.Query["lat"];
            var longitude = req.Query["long"];
            var zoom = (string)req.Query["zoom"] ?? "14";

            var sb = new StringBuilder();
            sb.Append("https://maps.googleapis.com/maps/api/staticmap")
              .Append($"?center={latitude},{longitude}")
              .Append("&size=400x400")
              .Append($"&zoom={zoom}")
              .Append($"&markers=color:red|{latitude},{longitude}")
              .Append("&format=png32")
              .Append($"&key={this._settings.Google.ApiKey}");
            var requestUri = new Uri(sb.ToString());

            var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

            return bytes;
        }
    }

    The NaverMapService class has similar logic, with the same inputs and assumptions. Here's the code:

    public class NaverMapService : IMapService
    {
        public async Task<byte[]> GetMapAsync(HttpRequest req)
        {
            var latitude = req.Query["lat"];
            var longitude = req.Query["long"];
            var zoom = (string)req.Query["zoom"] ?? "13";

            var sb = new StringBuilder();
            sb.Append("https://naveropenapi.apigw.ntruss.com/map-static/v2/raster")
              .Append($"?center={longitude},{latitude}")
              .Append("&w=400")
              .Append("&h=400")
              .Append($"&level={zoom}")
              .Append($"&markers=color:blue|pos:{longitude}%20{latitude}")
              .Append("&format=png")
              .Append("&lang=en");
            var requestUri = new Uri(sb.ToString());

            this._http.DefaultRequestHeaders.Clear();
            this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY-ID", this._settings.Naver.ClientId);
            this._http.DefaultRequestHeaders.Add("X-NCP-APIGW-API-KEY", this._settings.Naver.ClientSecret);

            var bytes = await this._http.GetByteArrayAsync(requestUri).ConfigureAwait(false);

            return bytes;
        }
    }

    Let's take a look at the function endpoints for Google Maps and Naver Map. As the GetMapAsync(req) method returns a byte array, you need to wrap it in a FileContentResult with the content type image/png.

    // Google Maps
    public class GoogleMapsTrigger
    {
        [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
        public async Task<IActionResult> GetGoogleMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
        {
            this._logger.LogInformation("C# HTTP trigger function processed a request.");

            var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

            return new FileContentResult(bytes, "image/png");
        }
    }

    // Naver Map
    public class NaverMapsTrigger
    {
        [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
        public async Task<IActionResult> GetNaverMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
        {
            this._logger.LogInformation("C# HTTP trigger function processed a request.");

            var bytes = await this._service.GetMapAsync(req).ConfigureAwait(false);

            return new FileContentResult(bytes, "image/png");
        }
    }

    Then, add the OpenAPI capability to each function endpoint. Here's the example:

    // Google Maps
    public class GoogleMapsTrigger
    {
        [FunctionName(nameof(GoogleMapsTrigger.GetGoogleMapImage))]
        // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
        [OpenApiOperation(operationId: nameof(GoogleMapsTrigger.GetGoogleMapImage), tags: new[] { "google" })]
        [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
        [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
        [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `14`")]
        [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
        // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
        public async Task<IActionResult> GetGoogleMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "google/image")] HttpRequest req)
        {
            ...
        }
    }

    // Naver Map
    public class NaverMapsTrigger
    {
        [FunctionName(nameof(NaverMapsTrigger.GetNaverMapImage))]
        // ⬇️⬇️⬇️ Add decorators provided by the OpenAPI extension ⬇️⬇️⬇️
        [OpenApiOperation(operationId: nameof(NaverMapsTrigger.GetNaverMapImage), tags: new[] { "naver" })]
        [OpenApiParameter(name: "lat", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **latitude** parameter")]
        [OpenApiParameter(name: "long", In = ParameterLocation.Query, Required = true, Type = typeof(string), Description = "The **longitude** parameter")]
        [OpenApiParameter(name: "zoom", In = ParameterLocation.Query, Required = false, Type = typeof(string), Description = "The **zoom level** parameter &ndash; Default value is `13`")]
        [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "image/png", bodyType: typeof(byte[]), Description = "The map image as an OK response")]
        // ⬆️⬆️⬆️ Add decorators provided by the OpenAPI extension ⬆️⬆️⬆️
        public async Task<IActionResult> GetNaverMapImage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "GET", Route = "naver/image")] HttpRequest req)
        {
            ...
        }
    }

    Run the function app locally. Here are the latitude and longitude values for Seoul, Korea (see the sample request URL after the list):

    • latitude: 37.574703
    • longitude: 126.978519
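    With the Functions host running locally, a request to the Google Maps endpoint looks like this - a sketch, assuming the default local port and api route prefix:

    http://localhost:7071/api/google/image?lat=37.574703&long=126.978519&zoom=14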

    Google Map for Seoul

    It seems to be working! Let's deploy it to Azure.

    API Management integration

    Visual Studio 2022 provides a built-in tool to deploy Azure Functions apps to Azure. In addition, the deployment tool supports seamless integration with Azure API Management, as long as your Azure Functions app has the OpenAPI capability enabled. In this post, I'm going to use this feature. Right-click on the Azure Functions project and select the "Publish" menu.

    Visual Studio context menu for publish

    Then, you will see the publish screen. Click the "➕ New" button to create a new publish profile.

    Create a new publish profile

    Choose "Azure" and click the "Next" button.

    Choose the target platform for publish

    Select the app instance. This time, simply pick the "Azure Function App (Windows)" option, then click "Next".

    Choose the target OS for publish

    If you have already provisioned an Azure Functions app instance, you will see it on the screen. Otherwise, create a new one. Then, click "Next".

    Choose the target instance for publish

    In the next step, you are asked to choose the Azure API Management instance for integration. Choose one, or create a new one. Then, click "Next".

    Choose the APIM instance for integration

    Finally, select the publish method - either local publish or a GitHub Actions workflow. Let's choose the local publish method for now. Then, click "Finish".

    Choose the deployment type

    The publish profile has been created. Click "Close" to move on.

    Publish profile created

    Now the function app is ready for deployment. Click the "Publish" button and see how it goes.

    Publish function app

    The Azure function app has been deployed and integrated with the Azure API Management instance.

    Function app published

    Go to the published function app site, and everything looks OK.

    Function app on Azure

    And API Management shows the function app integrated perfectly.

    Function app integrated with APIM

    Now, you are ready to create a custom connector. Let's move on.

    Two ways of building custom connector

    There are two ways to create a custom connector.

    Export custom connector from API Management

    First, you can use the built-in API Management feature directly. Click the ellipsis icon and select the "Create Power Connector" menu.

    Create Power Connector menu

    Then, you are redirected to this screen. While the "API" and "API display name" fields are pre-populated, you need to choose the Power Platform environment tied to your tenant. Choose an environment, click "Authenticate", and click "Create".

    Create custom connector screen

    Check your custom connector on Power Apps or Power Automate side.

    Custom connector created on Power Apps

    However, there's a caveat to this approach. Because it's tied to your tenant, you should use the second approach if you want to use this custom connector in another tenant.

    Import custom connector from OpenAPI document or URL

    Click the ellipsis icon again and select the "Export" menu.

    Export menu

    On the Export API screen, choose the "OpenAPI v2 (JSON)" panel, because Power Platform custom connectors currently accept version 2 of the OpenAPI document.

    Select OpenAPI v2

    Download the OpenAPI document to your local computer and move to your Power Apps or Power Automate page under your desired environment. I'm going to use the Power Automate page. First, go to the "Data" ➡️ "Custom connectors" page. Then, click the "➕ New custom connector" ➡️ "Import an OpenAPI file" at the top right corner.

    New custom connector

    When a modal pops up, give the custom connector name and import the OpenAPI document exported above. Then, click "Continue".

    Import custom connector

    Actually, that's it! Next, click the "✔️ Create connector" button to create the connector.

    Create custom connector

    Go back to the custom connector page, and you will see the "Maps API" custom connector you just created.

    Custom connector imported

    So, you are ready to create a Power Apps app to display your location on Google Maps or Naver Map! Let's move on.

    Where am I? Power Apps app

    Open Power Apps Studio and create an empty canvas app named Where am I, with a phone layout.

    Custom connector integration

    To use the custom connector created above, you need to add it to the Power App. Click the cylinder icon on the left and click the "Add data" button.

    Add custom connector to data pane

    Search the custom connector name, "Maps API", and click the custom connector to add.

    Search custom connector

    To use the custom connector, you also need to create a connection to it. Click the "Connect" button and move on.

    Create connection to custom connector

    Now, you've got the connection to the custom connector.

    Connection to custom connector ready

    Controls

    Let's build the Power Apps app. First of all, put three controls (Image, Slider and Button) onto the canvas.

    Power Apps control added

    Click the "Screen1" control and change the value on the property "OnVisible" to the formula below. The formula stores the current slider value in the zoomlevel collection.

    ClearCollect(
        zoomlevel,
        Slider1.Value
    )

    Click the "Botton1" control and change the value on the property "OnSelected" to the formula below. It passes the current latitude, longitude and zoom level to the custom connector and receives the image data. The received image data is stored in the result collection.

    ClearCollect(
        result,
        MAPS.GetGoogleMapImage(
            Location.Latitude,
            Location.Longitude,
            { zoom: First(zoomlevel).Value }
        )
    )

    Click the "Image1" control and change the value on the property "Image" to the formula below. It gets the image data from the result collection.

    First(result).Url

    Click the "Slider1" control and change the value on the property "OnChange" to the formula below. It stores the current slider value to the zoomlevel collection, followed by calling the custom connector to get the image data against the current location.

    ClearCollect(
        zoomlevel,
        Slider1.Value
    );
    ClearCollect(
        result,
        MAPS.GetGoogleMapImage(
            Location.Latitude,
            Location.Longitude,
            { zoom: First(zoomlevel).Value }
        )
    )

    That seems to be OK. But when you click the "Where am I?" button, it doesn't show the image. The First(result).Url value actually looks similar to this:

    appres://blobmanager/1090a86393a843adbfcf428f0b90e91b/1

    It's an internal image reference that you can't resolve from outside the app.

    Workaround: Power Automate workflow

    Therefore, you need a workaround using a Power Automate workflow. Open Power Automate, create an instant cloud flow with the Power Apps trigger, and name it "Where am I". Then add the input parameters lat, long and zoom.

    Power Apps trigger on Power Automate workflow

    Add custom connector action to get the map image.

    Select action to get the Google Maps image

    In the action, pass in the appropriate parameters.

    Pass parameters to the custom connector action

    Add a "Response" action and put the following values into each field.

    • "Body" field:

      {
      "base64Image": <power_automate_expression>
      }

      The <power_automate_expression> should be concat('data:', body('GetGoogleMapImage')?['$content-type'], ';base64,', body('GetGoogleMapImage')?['$content']).

    • "Response Body JSON Schema" field:

      {
        "type": "object",
        "properties": {
          "base64Image": {
            "type": "string"
          }
        }
      }

    Format the Response action

    Let's return to the Power Apps Studio and add the Power Automate workflow you created.

    Add Power Automate workflow

    Select "Button1" and change the value on the property "OnSelect" below. It replaces the direct call to the custom connector with the Power Automate workflow.

    ClearCollect(
        result,
        WhereamI.Run(
            Location.Latitude,
            Location.Longitude,
            First(zoomlevel).Value
        )
    )

    Also, change the value of the "OnChange" property of the "Slider1" control to the formula below, replacing the custom connector call with the Power Automate workflow call.

    ClearCollect(
        zoomlevel,
        Slider1.Value
    );
    ClearCollect(
        result,
        WhereamI.Run(
            Location.Latitude,
            Location.Longitude,
            First(zoomlevel).Value
        )
    )

    And finally, change the "Image1" control's "Image" property value below.

    First(result).base64Image

    The workaround has been applied. Click the "Where am I?" button to see your current location from Google Maps.

    Run Power Apps app #1

    If you change the slider left or right, you will see either the zoomed-in image or the zoomed-out image.

    Run Power Apps app #2

    Now, you've created a Power Apps app to show your current location using:

    • Google Maps API through the custom connector, and
    • Custom connector written in Azure Functions with OpenAPI extension!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how the custom connector works. After forking the repository, make sure you add all the necessary secrets to your repository, as documented in the README file.

    Then, click the "Deploy to Azure" button, and it will provision all necessary Azure resources and deploy an Azure Functions app for a custom connector.

    Deploy To Azure

    Once everything is deployed successfully, try to create a Power Apps app and Power Automate workflow to see your current location in real-time!

    Resources: For self-study!

    Want to know more about Power Platform custom connector and Azure Functions OpenAPI extension? Here are several resources you can take a look at:

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/30/index.html b/blog/tags/serverless-september/page/30/index.html index e1ba6ce041..e1b615ddc7 100644 --- a/blog/tags/serverless-september/page/30/index.html +++ b/blog/tags/serverless-september/page/30/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    33 posts tagged with "serverless-september"

    View All Tags

    · 9 min read
    Nitya Narasimhan

    Welcome to Day 2️⃣ of #30DaysOfServerless!

    Today, we kickstart our journey into serverless on Azure with a look at Functions-as-a-Service. We'll explore Azure Functions - from core concepts to usage patterns.

    Ready? Let's Go!


    What We'll Cover

    • What is Functions-as-a-Service? (FaaS)
    • What is Azure Functions?
    • Triggers, Bindings and Custom Handlers
    • What is Durable Functions?
    • Orchestrators, Entity Functions and Application Patterns
    • Exercise: Take the Cloud Skills Challenge!
    • Resources: #30DaysOfServerless Collection.


    1. What is FaaS?

    FaaS stands for Functions-as-a-Service. But what does that mean for us as application developers? We know that building and deploying modern applications at scale can get complicated, and it starts with us needing to make decisions about compute. In other words, we need to answer this question: "where should I host my application, given my resource dependencies and scaling requirements?"


    Azure has this useful flowchart (shown below) to guide your decision-making. You'll see that hosting options generally fall into three categories:

    • Infrastructure as a Service (IaaS) - where you provision and manage Virtual Machines yourself (cloud provider manages infra).
    • Platform as a Service (PaaS) - where you use a provider-managed hosting environment like Azure Container Apps.
    • Functions as a Service (FaaS) - where you forget about hosting environments and simply deploy your code for the provider to run.

    Here, "serverless" compute refers to hosting options where we (as developers) can focus on building apps without having to manage the infrastructure. See serverless compute options on Azure for more information.


    2. Azure Functions

    Azure Functions is the Functions-as-a-Service (FaaS) option on Azure. It is the ideal serverless solution if your application is event-driven with short-lived workloads. With Azure Functions, we develop applications as modular blocks of code (functions) that are executed on demand, in response to configured events (triggers). This approach brings us two advantages:

    • It saves us money. We only pay for the time the function runs.
    • It scales with demand. We have 3 hosting plans for flexible scaling behaviors.

    Azure Functions can be programmed in many popular languages (C#, F#, Java, JavaScript, TypeScript, PowerShell or Python), with Azure providing language-specific handlers and default runtimes to execute them.

    Concept: Custom Handlers
    • What if we wanted to program in a non-supported language?
    • Or we wanted to use a different runtime for a supported language?

    Custom Handlers have you covered! These are lightweight web servers that can receive and process input events from the Functions host - and return responses that can be delivered to any output targets. By this definition, custom handlers can be implemented in any language that supports receiving HTTP events. Check out the quickstart for writing a custom handler in Rust or Go.

    Custom Handlers
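    To give you a flavor, a custom handler is wired up in host.json by pointing the Functions host at your executable - an illustrative sketch, with the executable name as a placeholder:

    {
      "version": "2.0",
      "customHandler": {
        "description": {
          "defaultExecutablePath": "handler",
          "workingDirectory": "",
          "arguments": []
        },
        "enableForwardingHttpRequest": true
      }
    }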

    Concept: Triggers and Bindings

    We talked about what functions are (code blocks). But when are they invoked or executed? And how do we provide inputs (arguments) and retrieve outputs (results) from this execution?

    This is where triggers and bindings come in.

    • Triggers define how a function is invoked and what associated data it will provide. A function must have exactly one trigger.
    • Bindings declaratively define how a resource is connected to the function. The resource or binding can be of type input, output, or both. Bindings are optional. A function can have multiple input and output bindings.

    Azure Functions comes with a number of supported bindings that can be used to integrate relevant services to power a specific scenario. For instance:

    • HTTP Triggers - invokes the function in response to an HTTP request. Use this to implement serverless APIs for your application.
    • Event Grid Triggers invokes the function on receiving events from an Event Grid. Use this to process events reactively, and potentially publish responses back to custom Event Grid topics.
    • SignalR Service Trigger invokes the function in response to messages from Azure SignalR, allowing your application to take actions with real-time contexts.

    Triggers and bindings help you abstract your function's interfaces to other components it interacts with, eliminating hardcoded integrations. They are configured differently based on the programming language you use. For example, JavaScript functions are configured in the function.json file. Here's an example of what that looks like.

    {
      "disabled": false,
      "bindings": [
        // ... bindings here
        {
          "type": "bindingType",
          "direction": "in",
          "name": "myParamName",
          // ... more depending on binding
        }
      ]
    }

    The key thing to remember is that triggers and bindings have a direction property - triggers are always in, input bindings are in and output bindings are out. Some bindings can support a special inout direction.

    The documentation has code examples for bindings to popular Azure services. Here's an example of the bindings and trigger configuration for a BlobStorage use case.

    // function.json configuration

    {
      "bindings": [
        {
          "queueName": "myqueue-items",
          "connection": "MyStorageConnectionAppSetting",
          "name": "myQueueItem",
          "type": "queueTrigger",
          "direction": "in"
        },
        {
          "name": "myInputBlob",
          "type": "blob",
          "path": "samples-workitems/{queueTrigger}",
          "connection": "MyStorageConnectionAppSetting",
          "direction": "in"
        },
        {
          "name": "myOutputBlob",
          "type": "blob",
          "path": "samples-workitems/{queueTrigger}-Copy",
          "connection": "MyStorageConnectionAppSetting",
          "direction": "out"
        }
      ],
      "disabled": false
    }

    The code below shows the function implementation. In this scenario, the function is triggered by a queue message carrying an input payload with a blob name. In response, it copies that data to the resource associated with the output binding.

    // function implementation

    module.exports = async function(context) {
        context.log('Node.js Queue trigger function processed', context.bindings.myQueueItem);
        context.bindings.myOutputBlob = context.bindings.myInputBlob;
    };

    Concept: Custom Bindings

    What if we have a more complex scenario that requires bindings for non-supported resources?

    There is an option to create custom bindings if necessary. We don't have time to dive into the details here, but definitely check out the documentation.


    3. Durable Functions

    This sounds great, right? But now, let's talk about one challenge for Azure Functions. In the use cases so far, the functions are stateless - they take inputs at runtime if necessary, and return output results if required. But they are otherwise self-contained, which is great for scalability!

    But what if I needed to build more complex workflows that need to store and transfer state, and complete operations in a reliable manner? Durable Functions are an extension of Azure Functions that makes stateful workflows possible.

    Concept: Orchestrator Functions

    How can I create workflows that coordinate functions?

    Durable Functions use orchestrator functions to coordinate execution of other Durable functions within a given Functions app. These functions are durable and reliable. Later in this post, we'll talk briefly about some application patterns that showcase popular orchestration scenarios.

    Concept: Entity Functions

    How do I persist and manage state across workflows?

    Entity Functions provide explicit state management for Durable Functions, defining operations to read and write state on durable entities. They are associated with a special entity trigger for invocation. These are currently available only for a subset of programming languages, so check whether they are supported for your language of choice.
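    To make that concrete, here's a minimal counter entity sketched in JavaScript, assuming the durable-functions npm package - the operation names and initial state are illustrative:

    const df = require("durable-functions");

    module.exports = df.entity(function (context) {
        // Read current state, defaulting to 0 on first use.
        const currentValue = context.df.getState(() => 0);

        switch (context.df.operationName) {
            case "add":
                // Update state with the amount sent by the caller.
                context.df.setState(currentValue + context.df.getInput());
                break;
            case "reset":
                context.df.setState(0);
                break;
            case "get":
                // Return the current value to the caller.
                context.df.return(currentValue);
                break;
        }
    });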

    USAGE: Application Patterns

    Durable Functions are a fascinating topic that would require a separate, longer post to do justice. For now, let's look at some application patterns that showcase their value, starting with the simplest one - Function Chaining, as shown below:

    Function Chaining

    Here, we want to execute a sequence of named functions in a specific order. As shown in the snippet below, the orchestrator function coordinates invocations on the given functions in the desired sequence - "chaining" inputs and outputs to establish the workflow. Take note of the yield keyword. This triggers a checkpoint, preserving the current state of the function for reliable operation.

    const df = require("durable-functions");

    module.exports = df.orchestrator(function*(context) {
    try {
    const x = yield context.df.callActivity("F1");
    const y = yield context.df.callActivity("F2", x);
    const z = yield context.df.callActivity("F3", y);
    return yield context.df.callActivity("F4", z);
    } catch (error) {
    // Error handling or compensation goes here.
    }
    });

    Other application patterns for durable functions include:
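    One of those patterns is fan-out/fan-in, where the orchestrator runs several activities in parallel and then aggregates their results. Here's a minimal JavaScript sketch - the activity names (F1, F2, F3) are illustrative:

    const df = require("durable-functions");

    module.exports = df.orchestrator(function* (context) {
        // Fan out: get a batch of work items, then start all activities in parallel.
        const workBatch = yield context.df.callActivity("F1");
        const parallelTasks = workBatch.map((item) => context.df.callActivity("F2", item));

        // Fan in: wait for every parallel activity to complete.
        const results = yield context.df.Task.all(parallelTasks);

        // Aggregate the results and hand them to a final activity.
        const sum = results.reduce((prev, curr) => prev + curr, 0);
        yield context.df.callActivity("F3", sum);
    });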

    There's a lot more to explore but we won't have time to do that today. Definitely check the documentation and take a minute to read the comparison with Azure Logic Apps to understand what each technology provides for serverless workflow automation.


    4. Exercise

    That was a lot of information to absorb! Thankfully, there are a lot of examples in the documentation that can help put these in context. Here are a couple of exercises you can do, to reinforce your understanding of these concepts.


    5. What's Next?

    The goal for today was to give you a quick tour of key terminology and concepts related to Azure Functions. Tomorrow, we dive into the developer experience, starting with core tools for local development and ending by deploying our first Functions app.

    Want to do some prep work? Here are a few useful links:


    6. Resources


    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/31/index.html b/blog/tags/serverless-september/page/31/index.html index aa85010c7b..3c71a0015d 100644 --- a/blog/tags/serverless-september/page/31/index.html +++ b/blog/tags/serverless-september/page/31/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    What We'll Cover

    • What is Serverless September? (6 initiatives)
    • How can I participate? (3 actions)
    • How can I skill up (30 days)
    • Who is behind this? (Team Contributors)
    • How can you contribute? (Custom Issues)
    • Exercise: Take the Cloud Skills Challenge!
    • Resources: #30DaysOfServerless Collection.

    Serverless September

    Welcome to Day 01 of 🍂 #ServerlessSeptember! Today, we kick off a full month of content and activities to skill you up on all things Serverless on Azure with content, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfServerless in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

    What are 3 things you can do today, to jumpstart your learning journey?

    Serverless Hacks


    #30DaysOfServerless

    #30DaysOfServerless is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Serverless On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON FUNCTIONS ⚡️

    Here's a sneak peek at what we have planned for week 1. We'll start with a broad look at fundamentals, walkthrough examples for each targeted programming language, then wrap with a post that showcases the role of Azure Functions in powering different serverless scenarios.

    • Sep 02: Learn Core Concepts for Azure Functions
    • Sep 03: Build and deploy your first Function
    • Sep 04: Azure Functions - for Java Developers!
    • Sep 05: Azure Functions - for JavaScript Developers!
    • Sep 06: Azure Functions - for .NET Developers!
    • Sep 07: Azure Functions - for Python Developers!
    • Sep 08: Wrap: Azure Functions + Serverless on Azure

    Ways to Participate..

    We hope you are as excited as we are, to jumpstart this journey. We want to make this a useful, beginner-friendly journey and we need your help!

    Here are the many ways you can participate:

    • Follow Azure on dev.to - we'll republish posts under this series page and welcome comments and feedback there!
    • Discussions on GitHub - Use this if you have feedback for us (on how we can improve these resources), or want to chat with your peers about serverless topics.
    • Custom Issues - just pick a template, create a new issue by filling in the requested details, and submit. You can use these to:
      • submit questions for AskTheExpert (live Q&A) ahead of time
      • submit your own articles or projects for community to learn from
      • share your ServerlessHack and get listed in our Hall Of Fame!
      • report bugs or share ideas for improvements

    Here's the list of custom issues currently defined.

    Community Buzz

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Azure Functions post tomorrow!


    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/32/index.html b/blog/tags/serverless-september/page/32/index.html index d149048fa6..f26f5ea94e 100644 --- a/blog/tags/serverless-september/page/32/index.html +++ b/blog/tags/serverless-september/page/32/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 3 min read
    Sara Gibbons

    ✨ Serverless September For Students

    My love for the tech industry grows as it evolves - not just because of the new technologies to play with, but because paths into a tech career continue to expand, bringing so many new voices, ideas and perspectives to our industry, with serverless computing removing barriers to entry for so many.

    It's a reason I enjoy working with universities and students. I get to hear the excitement of learning, fresh ideas and perspectives from our student community. All you students are incredible! How you view serverless, and what it can do, so cool!

    This year for Serverless September we want to hear all the amazing ways our student community is learning and working with Azure Serverless, and have all new ways for you to participate.

    Getting Started

    If you don't already have an Azure for Students account you can easily get your FREE account created at Azure for Students Sign up.

    If you are new to serverless, here are a couple links to get you started:

    No Experience, No problem

    For Serverless September we have planned beginner friendly content all month long. Covering such services as:

    You can follow #30DaysOfServerless here on the blog for daily posts covering concepts, scenarios, and how to create end-to-end solutions.

    Join the Cloud Skills Challenge where we have selected a list of Learn Modules for you to go through at your own pace, including deploying a full stack application with Azure Static Web Apps.

    Have A Question

    We want to hear it! All month long we will have Ask The Expert sessions. Submit your questions at any time and we will be sure to get one of our Azure Serverless experts to get you an answer.

    Share What You've Created

    If you have written a blog post, recorded a video, or have an open source Azure Serverless project, we'd love to see it! Here are some links for you to share your creations:

    🧭 Explore Student Resources

    ⚡️ Join us!

    Multiple teams across Microsoft are working to create Serverless September! They all want to hear from our incredible student community. We can't wait to share all the Serverless September resources and hear what you have learned and created. Here are some ways to keep up to date on all Serverless September activity:

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/33/index.html b/blog/tags/serverless-september/page/33/index.html index feb9832548..73b7e6e8b5 100644 --- a/blog/tags/serverless-september/page/33/index.html +++ b/blog/tags/serverless-september/page/33/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 3 min read
    Nitya Narasimhan
    Devanshi Joshi

    🍂 It's September?

    Well, almost! September 1 is a few days away and I'm excited! Why? Because it's the perfect time to revisit #Serverless September, a month of

    ".. content-driven learning where experts and practitioners share their insights and tutorials on how to use serverless technologies effectively in today's ecosystems"

    If the words look familiar, it's because I actually wrote them 2 years ago when we launched the 2020 edition of this series. You might even recall this whimsical image I drew to capture the concept of September (fall) and Serverless (event-driven on-demand compute). Since then, a lot has happened in the serverless ecosystem!

    You can still browse the 2020 Content Collection to find great talks, articles and code samples to get started using Serverless on Azure. But read on to learn what's new!

    🧐 What's New?

    Well - quite a few things actually. This year, Devanshi Joshi and I expanded the original concept in a number of ways. Here's just a few of them that come to mind.

    New Website

    This year, we created this website (shortcut: https://aka.ms/serverless-september) to serve as a permanent home for content in 2022 and beyond - making it a canonical source for the #serverless posts we publish to tech communities like dev.to, Azure Developer Community and Apps On Azure. We hope this also makes it easier for you to search for, or discover, current and past articles that support your learning journey!

    Start by bookmarking these two sites:

    More Options

    Previous years focused on curating and sharing content authored by Microsoft and community contributors, showcasing serverless examples and best practices. This was perfect for those who already had experience with the core devtools and concepts.

    This year, we wanted to combine beginner-friendly options (for those just starting their serverless journey) with more advanced insights (for those looking to skill up further). Here's a sneak peek at some of the initiatives we've got planned!

    We'll also explore the full spectrum of serverless - from Functions-as-a-Service (for granularity) to Containerization (for deployment) and Microservices (for scalability). Here are a few services and technologies you'll get to learn more about:

    ⚡️ Join us!

    This has been a labor of love from multiple teams at Microsoft! We can't wait to share all the resources that we hope will help you skill up on all things Serverless this September! Here are a couple of ways to participate:

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/4/index.html b/blog/tags/serverless-september/page/4/index.html index b06e4e3c91..6feb141f7d 100644 --- a/blog/tags/serverless-september/page/4/index.html +++ b/blog/tags/serverless-september/page/4/index.html @@ -14,14 +14,14 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 5 min read
    Madhura Bharadwaj

    Welcome to Day 26 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Monitoring your Azure Functions
    • Built-in log streaming
    • Live Metrics stream
    • Troubleshooting Azure Functions


    Monitoring your Azure Functions:

    Azure Functions uses Application Insights to collect and analyze log data from individual function executions in your function app.

    Using Application Insights

    Application Insights collects log, performance, and error data. By automatically detecting performance anomalies and featuring powerful analytics tools, you can more easily diagnose issues and better understand how your functions are used. These tools are designed to help you continuously improve performance and usability of your functions. You can even use Application Insights during local function app project development.

    Typically, you create an Application Insights instance when you create your function app. In this case, the instrumentation key required for the integration is already set as an application setting named APPINSIGHTS_INSTRUMENTATIONKEY. With Application Insights integration enabled, telemetry data is sent to your connected Application Insights instance. This data includes logs generated by the Functions host, traces written from your functions code, and performance data. In addition to data from your functions and the Functions host, you can also collect data from the Functions scale controller.

    By default, the data collected from your function app is stored in Application Insights. In the Azure portal, Application Insights provides an extensive set of visualizations of your telemetry data. You can drill into error logs and query events and metrics. To learn more, including basic examples of how to view and query your collected data, see Analyze Azure Functions telemetry in Application Insights.

    Using Log Streaming

    In addition to this, you can have a smoother debugging experience through log streaming. There are two ways to view a stream of log files being generated by your function executions.

    • Built-in log streaming: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during local development and when you use the Test tab in the portal. All log-based information is displayed. For more information, see Stream logs. This streaming method supports only a single instance and can't be used with an app running on Linux in a Consumption plan.
    • Live Metrics Stream: when your function app is connected to Application Insights, you can view log data and other metrics in near real-time in the Azure portal using Live Metrics Stream. Use this method when monitoring functions running on multiple-instances or on Linux in a Consumption plan. This method uses sampled data. Log streams can be viewed both in the portal and in most local development environments.
    Monitoring Azure Functions

    Learn how to configure monitoring for your Azure Functions. See the Monitoring Azure Functions data reference for detailed information on the metrics and logs created by Azure Functions.
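    If you prefer the command line, you can also tail the built-in log stream with the Azure CLI - a quick sketch, with placeholder names for your function app and resource group:

    az webapp log tail --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME>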

    In addition to this, Azure Functions uses Azure Monitor to monitor the health of your function apps. Azure Functions collects the same kinds of monitoring data as other Azure resources, as described in Azure Monitor data collection.

    Troubleshooting your Azure Functions:

    When you do run into issues with your function app, Azure Functions diagnostics points out what’s wrong. It guides you to the right information to troubleshoot and resolve the issue more easily and quickly.

    Let’s explore how to use Azure Functions diagnostics to diagnose and solve common function app issues.

    1. Navigate to your function app in the Azure portal.
    2. Select Diagnose and solve problems to open Azure Functions diagnostics.
    3. Once you’re here, there are multiple ways to retrieve the information you’re looking for. Choose a category that best describes the issue of your function app by using the keywords in the homepage tile. You can also type a keyword that best describes your issue in the search bar. There’s also a section at the bottom of the page that will directly take you to some of the more popular troubleshooting tools. For example, you could type execution to see a list of diagnostic reports related to your function app execution and open them directly from the homepage.

    Monitoring and troubleshooting apps in Azure Functions

    4. For example, click on the Function App Down or Reporting Errors link under the Popular troubleshooting tools section. You will find detailed analysis, insights and next steps for the issues that were detected. On the left you'll see a list of detectors. Click on them to explore more, or if there's a particular keyword you want to look for, type it into the search bar at the top.

    Monitoring and troubleshooting apps in Azure Functions

    TROUBLESHOOTING TIP

    Here are some general troubleshooting tips that you can follow if you find your function app throwing the Azure Functions Runtime unreachable error.

    Also be sure to check out the recommended best practices to ensure your Azure Functions are highly reliable. This article details some best practices for designing and deploying efficient function apps that remain healthy and perform well in a cloud-based environment.

    Bonus tip:

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/5/index.html b/blog/tags/serverless-september/page/5/index.html index c47d25d8d6..26ec86491f 100644 --- a/blog/tags/serverless-september/page/5/index.html +++ b/blog/tags/serverless-september/page/5/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 7 min read
    Brian Benz

    Welcome to Day 25 of #30DaysOfServerless!

    Azure Container Apps enable application code packaged in containers to run and scale without the overhead of managing cloud infrastructure and container orchestration. In this post I'll show you how to deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps.


    What We'll Cover

    • Introduction to Deploying Java containers in the cloud
    • Step-by-step: Deploying to Azure Container Registry
    • Step-by-step: Deploying and running on Azure Container Apps
    • Resources: For self-study!


    Deploy Java containers to cloud

    We'll deploy a Java application running on Spring Boot in a container to Azure Container Registry and Azure Container Apps. Here are the main steps:

    • Create Azure Container Registry (ACR) on Azure portal
    • Create Azure Container App (ACA) on Azure portal.
    • Deploy code to Azure Container Registry from the Azure CLI.
    • Deploy container from ACR to ACA using the Azure portal.
    PRE-REQUISITES

    Sign in to Azure from the CLI using the az login command, and follow the prompts in your browser to complete the authentication process. Also, ensure you're running the latest version of the CLI by using the az upgrade command.

    1. Get Sample Code

    Fork and clone the sample GitHub repo to your local machine. Navigate to the repository page and click Fork in the top-right corner of the page.

    The example code that we're using is a very basic containerized Spring Boot example. There are a lot more details to learn about Spring Boot apps in Docker; for a deep dive, check out this Spring Boot Guide.

    2. Run Sample Locally (Optional)

    If you have docker installed locally, you can optionally test the code on your local machine. Navigate to the root directory of the forked repository and run the following commands:

    docker build -t spring-boot-docker-aca .
    docker run -p 8080:8080 spring-boot-docker-aca

    Open a browser and go to http://localhost:8080. You should see this message:

    Hello Docker World

    That indicates that the Spring Boot app is successfully running locally in a Docker container.

    Next, let's set up an Azure Container Registry and an Azure Container App, and deploy this container to the cloud!


    3. Step-by-step: Deploy to ACR

    To create a container registry from the portal dashboard, Select Create a resource > Containers > Container Registry.

    Navigate to container registry in portal

    In the Basics tab, enter values for Resource group and Registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. Create a new resource group in the West US location named spring-boot-docker-aca. Select the 'Basic' SKU.

    Keep the default values for the remaining settings. Then select Review + create, then Create. When the Deployment succeeded message appears, select the container registry in the portal.

    Note the registry server name ending with azurecr.io. You will use this in the following steps when you push and pull images with Docker.

    3.1 Log into registry using the Azure CLI

    Before pushing and pulling container images, you must log in to the registry instance. Sign into the Azure CLI on your local machine, then run the az acr login command. For this step, use the registry name, not the server name ending with azurecr.io.

    From the command line, type:

    az acr login --name myregistryname

    The command returns Login Succeeded once completed.

    3.2 Build & deploy with az acr build

Next, we're going to build and deploy the Docker container using the az acr build command. az acr build runs a Docker build from local code and pushes the resulting image to Azure Container Registry if the build succeeds.

From the command line, go to the root of your local clone of the spring-boot-docker-aca repo and type:

    az acr build --registry myregistryname --image spring-boot-docker-aca:v1 .

    3.3 List container images

Once the az acr build command completes, you should be able to view the container image as a repository in the registry. In the portal, open your registry and select Repositories, then select the spring-boot-docker-aca repository created by the build. You should also see the v1 image under Tags.
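You can also verify the image from the CLI instead of the portal (a quick sketch using the same placeholder registry name):

```bash
# List the repositories in the registry
az acr repository list --name myregistryname --output table

# List the tags for the spring-boot-docker-aca repository (you should see v1)
az acr repository show-tags --name myregistryname --repository spring-boot-docker-aca --output table
```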

    4. Deploy on ACA

    Now that we have an image in the Azure Container Registry, we can deploy it to Azure Container Apps. For the first deployment, we'll pull the container from our ACR as part of the ACA setup.

    4.1 Create a container app

    We'll create the container app at the same place that we created the container registry in the Azure portal. From the portal, select Create a resource > Containers > Container App. In the Basics tab, set these values:

    4.2 Enter project details

• Subscription: Your Azure subscription.
• Resource group: Use the spring-boot-docker-aca resource group.
• Container app name: Enter spring-boot-docker-aca.

    4.3 Create an environment

    1. In the Create Container App environment field, select Create new.

    2. In the Create Container App Environment page on the Basics tab, enter the following values:

  • Environment name: Enter my-environment.
  • Region: Select westus3.
    3. Select OK.

    4. Select the Create button at the bottom of the Create Container App Environment page.

    5. Select the Next: App settings button at the bottom of the page.

    5. App settings tab

    The App settings tab is where you connect to the ACR and pull the repository image:

• Use quickstart image: Uncheck the checkbox.
• Name: Enter spring-boot-docker-aca.
• Image source: Select Azure Container Registry.
• Registry: Select your ACR from the list.
• Image: Select spring-boot-docker-aca from the list.
• Image Tag: Select v1 from the list.

    5.1 Application ingress settings

• Ingress: Select Enabled.
• Ingress visibility: Select External to publicly expose your container app.
• Target port: Enter 8080.

    5.2 Deploy the container app

    1. Select the Review and create button at the bottom of the page.
    2. Select Create.

    Once the deployment is successfully completed, you'll see the message: Your deployment is complete.
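If you'd rather script the Container Apps setup instead of using the portal, a roughly equivalent Azure CLI sketch might look like this (it assumes the containerapp CLI extension is installed, the admin user is enabled on the registry, and the placeholder names from earlier):

```bash
# Create the Container Apps environment
az containerapp env create \
  --name my-environment \
  --resource-group spring-boot-docker-aca \
  --location westus3

# Look up the ACR admin password (the admin user must be enabled on the registry)
acrPassword=$(az acr credential show --name myregistryname --query "passwords[0].value" --output tsv)

# Create the container app from the v1 image, with external ingress on port 8080
az containerapp create \
  --name spring-boot-docker-aca \
  --resource-group spring-boot-docker-aca \
  --environment my-environment \
  --image myregistryname.azurecr.io/spring-boot-docker-aca:v1 \
  --registry-server myregistryname.azurecr.io \
  --registry-username myregistryname \
  --registry-password "$acrPassword" \
  --target-port 8080 \
  --ingress external
```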

    5.3 Verify deployment

    In the portal, go to the Overview of your spring-boot-docker-aca Azure Container App, and click on the Application Url. You should see this message in the browser:

    Hello Docker World

That indicates that the Spring Boot app is running in a Docker container in your spring-boot-docker-aca Azure Container App.

    Resources: For self-study!

Once you have an understanding of the basics in this post, there is so much more to learn!

    Thanks for stopping by!


    · 19 min read
    Alex Wolf

    Welcome to Day 24 of #30DaysOfServerless!

    We continue exploring E2E scenarios with this tutorial where you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps.

    The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.


    What We'll Cover

    • Deploy ASP.NET Core 6.0 app to Azure Container Apps
    • Automate deployment workflows using GitHub Actions
    • Provision and deploy resources using Azure Bicep
    • Exercise: Try this yourself!
    • Resources: For self-study!


    Introduction

    Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.

    In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps. The application consists of a front-end web app built using Blazor Server, as well as two Web API projects to manage data. These projects will exist as three separate containers inside of a shared container apps environment.

    You will use GitHub Actions in combination with Bicep to deploy the application. These tools provide an approachable and sustainable solution for building CI/CD pipelines and working with Container Apps.

    PRE-REQUISITES

    Architecture

In this tutorial, we'll set up a container apps environment with a separate container for each project in the sample store app. The major components of the sample project include:

    • A Blazor Server front-end web app to display product information
    • A products API to list available products
    • An inventory API to determine how many products are in stock
    • GitHub Actions and Bicep templates to provision Azure resources and then build and deploy the sample app.

    You will explore these templates later in the tutorial.

    Public internet traffic should be proxied to the Blazor app. The back-end APIs should only be reachable via requests from the Blazor app inside the container apps environment. This setup can be achieved using container apps environment ingress configurations during deployment.

    An architecture diagram of the shopping app


    Project Sources

Want to follow along? Fork the sample below. The tutorial can be completed with or without Dapr integration. Pick the path you feel most comfortable with. Dapr provides various benefits that make working with microservices easier - you can learn more in the docs. For this tutorial you will need GitHub and the Azure CLI.

    PICK YOUR PATH

    To follow along with this tutorial, fork the relevant sample project below.

    You can run the app locally from Visual Studio:

    • Right click on the Blazor Store project and select Set as Startup Project.
    • Press the start button at the top of Visual Studio to run the app.
    • (Once running) start each API in the background by
    • right-clicking on the project node
    • selecting Debug --> Start without debugging.

    Once the Blazor app is running, you should see something like this:

    An architecture diagram of the shopping app


    Configuring Azure credentials

    In order to deploy the application to Azure through GitHub Actions, you first need to create a service principal. The service principal will allow the GitHub Actions process to authenticate to your Azure subscription to create resources and deploy code. You can learn more about Service Principals in the Azure CLI documentation. For this step you'll need to be logged into the Azure CLI.

    1) If you have not done so already, make sure to fork the sample project to your own GitHub account or organization.

2) Once you have completed this step, create a service principal using the Azure CLI command below:

    ```azurecli
    $subscriptionId=$(az account show --query id --output tsv)
    az ad sp create-for-rbac --sdk-auth --name WebAndApiSample --role Contributor --scopes /subscriptions/$subscriptionId
    ```

3) Copy the JSON output of the CLI command to your clipboard

4) Under the settings tab of your forked GitHub repo, create a new secret named AzureSPN. The name is important to match the Bicep templates included in the project, which we'll review later. Paste the copied service principal values on your clipboard into the secret and save your changes. This new secret will be used by the GitHub Actions workflow to authenticate to Azure.

A screenshot of adding GitHub secrets.

    Deploy using Github Actions

    You are now ready to deploy the application to Azure Container Apps using GitHub Actions. The sample application includes a GitHub Actions template that is configured to build and deploy any changes to a branch named deploy. The deploy branch does not exist in your forked repository by default, but you can easily create it through the GitHub user interface.
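If you prefer the command line to the GitHub UI, you can also create and push the branch from a local clone of your fork (a small sketch; it assumes the fork is already cloned and origin points to it):

```bash
# Create the deploy branch from main and push it to your fork
git checkout main
git pull
git checkout -b deploy
git push -u origin deploy
```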

    1) Switch to the Actions tab along the top navigation of your GitHub repository. If you have not done so already, ensure that workflows are enabled by clicking the button in the center of the page.

    A screenshot showing how to enable GitHub actions

2) Navigate to the main Code tab of your repository and select the main dropdown. Enter deploy into the branch input box, and then select Create branch: deploy from 'main'.

    A screenshot showing how to create the deploy branch

3) On the new deploy branch, navigate down into the .github/workflows folder. You should see a file called deploy.yml, which contains the main GitHub Actions workflow script. Click on the file to view its content. You'll learn more about this file later in the tutorial.

4) Click the pencil icon in the upper right to edit the document.

5) Change the RESOURCE_GROUP_NAME: value to msdocswebappapis or another valid resource group name of your choosing.

6) In the upper right of the screen, select Start commit and then Commit changes to commit your edit. This will persist the change to the file and trigger the GitHub Actions workflow to build and deploy the app.

    A screenshot showing how to commit changes

7) Switch to the Actions tab along the top navigation again. You should see the workflow running to create the necessary resources and deploy the app. The workflow may take several minutes to run. When it completes successfully, all of the jobs should have a green checkmark icon next to them.

    The completed GitHub workflow.

    Explore the Azure resources

    Once the GitHub Actions workflow has completed successfully you can browse the created resources in the Azure portal.

1) On the left navigation, select Resource Groups. Next, choose the msdocswebappapis resource group that was created by the GitHub Actions workflow.

    2) You should see seven resources available that match the screenshot and table descriptions below.

    The resources created in Azure.

• inventory (Container app): The containerized inventory API.
• msdocswebappapisacr (Container registry): A registry that stores the built container images for your apps.
• msdocswebappapisai (Application Insights): Provides advanced monitoring, logging and metrics for your apps.
• msdocswebappapisenv (Container apps environment): A container environment that manages networking, security and resource concerns. All of your containers live in this environment.
• msdocswebappapislogs (Log Analytics workspace): A workspace for managing logging and analytics for the container apps environment.
• products (Container app): The containerized products API.
• store (Container app): The Blazor front-end web app.

    3) You can view your running app in the browser by clicking on the store container app. On the overview page, click the Application Url link on the upper right of the screen.

The link to browse the app.

    Understanding the GitHub Actions workflow

The GitHub Actions workflow created and deployed resources to Azure using the deploy.yml file in the .github/workflows folder at the root of the project. The primary purpose of this file is to respond to events - such as commits to a branch - and run jobs to accomplish tasks. The deploy.yml file in the sample project has three main jobs:

    • Provision: Create the necessary resources in Azure, such as the container apps environment. This step leverages Bicep templates to create the Azure resources, which you'll explore in a moment.
    • Build: Create the container images for the three apps in the project and store them in the container registry.
    • Deploy: Deploy the container images to the different container apps created during the provisioning job.

    The deploy.yml file also accepts parameters to make the workflow more dynamic, such as setting the resource group name or the Azure region resources will be provisioned to.

    Below is a commented version of the deploy.yml file that highlights the essential steps.

name: Build and deploy .NET application to Container Apps

# Trigger the workflow on pushes to the deploy branch
on:
  push:
    branches:
      - deploy

env:
  # Set workflow variables
  RESOURCE_GROUP_NAME: msdocswebappapis

  REGION: eastus

  STORE_DOCKER: Store/Dockerfile
  STORE_IMAGE: store

  INVENTORY_DOCKER: Store.InventoryApi/Dockerfile
  INVENTORY_IMAGE: inventory

  PRODUCTS_DOCKER: Store.ProductApi/Dockerfile
  PRODUCTS_IMAGE: products

jobs:
  # Create the required Azure resources
  provision:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout to the branch
        uses: actions/checkout@v2

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AzureSPN }}

      - name: Create resource group
        uses: azure/CLI@v1
        with:
          inlineScript: >
            echo "Creating resource group in Azure"
            echo "Executing 'az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}'"
            az group create -l ${{ env.REGION }} -n ${{ env.RESOURCE_GROUP_NAME }}

      # Use Bicep templates to create the resources in Azure
      - name: Creating resources
        uses: azure/CLI@v1
        with:
          inlineScript: >
            echo "Creating resources"
            az deployment group create --resource-group ${{ env.RESOURCE_GROUP_NAME }} --template-file '/github/workspace/Azure/main.bicep' --debug

  # Build the three app container images
  build:
    runs-on: ubuntu-latest
    needs: provision

    steps:
      - name: Checkout to the branch
        uses: actions/checkout@v2

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AzureSPN }}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Login to ACR
        run: |
          set -euo pipefail
          access_token=$(az account get-access-token --query accessToken -o tsv)
          refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
          docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

      - name: Build the products api image and push it to ACR
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}
          file: ${{ env.PRODUCTS_DOCKER }}

      - name: Build the inventory api image and push it to ACR
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}
          file: ${{ env.INVENTORY_DOCKER }}

      - name: Build the frontend image and push it to ACR
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}
          file: ${{ env.STORE_DOCKER }}

  # Deploy the three container images
  deploy:
    runs-on: ubuntu-latest
    needs: build

    steps:
      - name: Checkout to the branch
        uses: actions/checkout@v2

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AzureSPN }}

      - name: Installing Container Apps extension
        uses: azure/CLI@v1
        with:
          inlineScript: >
            az config set extension.use_dynamic_install=yes_without_prompt

            az extension add --name containerapp --yes

      - name: Login to ACR
        run: |
          set -euo pipefail
          access_token=$(az account get-access-token --query accessToken -o tsv)
          refresh_token=$(curl https://${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
          docker login -u 00000000-0000-0000-0000-000000000000 --password-stdin ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io <<< "$refresh_token"

      - name: Deploy Container Apps
        uses: azure/CLI@v1
        with:
          inlineScript: >
            az containerapp registry set -n products -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

            az containerapp update -n products -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.PRODUCTS_IMAGE }}:${{ github.sha }}

            az containerapp registry set -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

            az containerapp update -n inventory -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.INVENTORY_IMAGE }}:${{ github.sha }}

            az containerapp registry set -n store -g ${{ env.RESOURCE_GROUP_NAME }} --server ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io

            az containerapp update -n store -g ${{ env.RESOURCE_GROUP_NAME }} -i ${{ env.RESOURCE_GROUP_NAME }}acr.azurecr.io/${{ env.STORE_IMAGE }}:${{ github.sha }}

      - name: logout
        run: >
          az logout

    Understanding the Bicep templates

    During the provisioning stage of the GitHub Actions workflow, the main.bicep file is processed. Bicep files provide a declarative way of generating resources in Azure and are ideal for managing infrastructure as code. You can learn more about Bicep in the related documentation. The main.bicep file in the sample project creates the following resources:

    • The container registry to store images of the containerized apps.
    • The container apps environment, which handles networking and resource management for the container apps.
    • Three container apps - one for the Blazor front-end and two for the back-end product and inventory APIs.
    • Configuration values to connect these services together
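If you want to validate or preview the template outside of the GitHub Actions workflow, the Azure CLI can do a dry run against your resource group (a sketch; it assumes you've cloned the repository locally and already created the resource group):

```bash
# Compile the Bicep file to catch syntax errors
az bicep build --file Azure/main.bicep

# Preview the changes the template would make, without deploying anything
az deployment group what-if \
  --resource-group msdocswebappapis \
  --template-file Azure/main.bicep
```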

    main.bicep without Dapr

    param location string = resourceGroup().location

// create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

// create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

// create the various configuration pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

// create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the store api container app
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    main.bicep with Dapr


    param location string = resourceGroup().location

// create the azure container registry
    resource acr 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
    name: toLower('${resourceGroup().name}acr')
    location: location
    sku: {
    name: 'Basic'
    }
    properties: {
    adminUserEnabled: true
    }
    }

// create the aca environment
    module env 'environment.bicep' = {
    name: 'containerAppEnvironment'
    params: {
    location: location
    }
    }

// create the various config pairs
    var shared_config = [
    {
    name: 'ASPNETCORE_ENVIRONMENT'
    value: 'Development'
    }
    {
    name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
    value: env.outputs.appInsightsInstrumentationKey
    }
    {
    name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
    value: env.outputs.appInsightsConnectionString
    }
    ]

// create the products api container app
    module products 'container_app.bicep' = {
    name: 'products'
    params: {
    name: 'products'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the inventory api container app
    module inventory 'container_app.bicep' = {
    name: 'inventory'
    params: {
    name: 'inventory'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: false
    }
    }

// create the store api container app
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: shared_config
    externalIngress: true
    }
    }


    Bicep Modules

    The main.bicep file references modules to create resources, such as module products. Modules are a feature of Bicep templates that enable you to abstract resource declarations into their own files or sub-templates. As the main.bicep file is processed, the defined modules are also evaluated. Modules allow you to create resources in a more organized and reusable way. They can also define input and output parameters that are passed to and from the parent template, such as the name of a resource.

    For example, the environment.bicep module extracts the details of creating a container apps environment into a reusable template. The module defines necessary resource dependencies such as Log Analytics Workspaces and an Application Insights instance.

    environment.bicep without Dapr

    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString

    environment.bicep with Dapr


    param baseName string = resourceGroup().name
    param location string = resourceGroup().location

    resource logs 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
    name: '${baseName}logs'
    location: location
    properties: any({
    retentionInDays: 30
    features: {
    searchVersion: 1
    }
    sku: {
    name: 'PerGB2018'
    }
    })
    }

    resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
    name: '${baseName}ai'
    location: location
    kind: 'web'
    properties: {
    Application_Type: 'web'
    WorkspaceResourceId: logs.id
    }
    }

    resource env 'Microsoft.App/managedEnvironments@2022-01-01-preview' = {
    name: '${baseName}env'
    location: location
    properties: {
    appLogsConfiguration: {
    destination: 'log-analytics'
    logAnalyticsConfiguration: {
    customerId: logs.properties.customerId
    sharedKey: logs.listKeys().primarySharedKey
    }
    }
    }
    }

    output id string = env.id
    output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey
    output appInsightsConnectionString string = appInsights.properties.ConnectionString


The container_app.bicep template defines numerous parameters to provide a reusable template for creating container apps. This allows the module to be used in other CI/CD pipelines as well.

    container_app.bicep without Dapr

    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn

    container_app.bicep with Dapr


    param name string
    param location string = resourceGroup().location
    param containerAppEnvironmentId string
    param repositoryImage string = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
    param envVars array = []
    param registry string
    param minReplicas int = 1
    param maxReplicas int = 1
    param port int = 80
    param externalIngress bool = false
    param allowInsecure bool = true
    param transport string = 'http'
    param appProtocol string = 'http'
    param registryUsername string
    @secure()
    param registryPassword string

    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]
    registries: [
    {
    server: registry
    username: registryUsername
    passwordSecretRef: 'container-registry-password'
    }
    ]
    ingress: {
    external: externalIngress
    targetPort: port
    transport: transport
    allowInsecure: allowInsecure
    }
    }
    template: {
    containers: [
    {
    image: repositoryImage
    name: name
    env: envVars
    }
    ]
    scale: {
    minReplicas: minReplicas
    maxReplicas: maxReplicas
    }
    }
    }
    }

    output fqdn string = containerApp.properties.configuration.ingress.fqdn


    Understanding configuration differences with Dapr

    The code for this specific sample application is largely the same whether or not Dapr is integrated. However, even with this simple app, there are a few benefits and configuration differences when using Dapr that are worth exploring.

    In this scenario most of the changes are related to communication between the container apps. However, you can explore the full range of Dapr benefits by reading the Dapr integration with Azure Container Apps article in the conceptual documentation.

    Without Dapr

Without Dapr, the main.bicep template handles wiring up the front-end store app to communicate with the back-end APIs by manually managing environment variables. The Bicep template retrieves the fully qualified domain names (fqdn) of the API apps as output parameters when they are created. Those values are then set as environment variables on the store container app.


// Retrieve environment variables from API container creation
    var frontend_config = [
    {
    name: 'ProductsApi'
    value: 'http://${products.outputs.fqdn}'
    }
    {
    name: 'InventoryApi'
    value: 'http://${inventory.outputs.fqdn}'
    }
    ]

// create the store api container app, passing in config
    module store 'container_app.bicep' = {
    name: 'store'
    params: {
    name: 'store'
    location: location
    registryPassword: acr.listCredentials().passwords[0].value
    registryUsername: acr.listCredentials().username
    containerAppEnvironmentId: env.outputs.id
    registry: acr.name
    envVars: union(shared_config, frontend_config)
    externalIngress: true
    }
    }

    The environment variables are then retrieved inside of the program class and used to configure the base URLs of the corresponding HTTP clients.


    builder.Services.AddHttpClient("Products", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("ProductsApi")));
    builder.Services.AddHttpClient("Inventory", (httpClient) => httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("InventoryApi")));

    With Dapr

    Dapr can be enabled on a container app when it is created, as seen below. This configuration adds a Dapr sidecar to the app to streamline discovery and communication features between the different container apps in your environment.


// Create the container app with Dapr enabled
    resource containerApp 'Microsoft.App/containerApps@2022-01-01-preview' ={
    name: name
    location: location
    properties:{
    managedEnvironmentId: containerAppEnvironmentId
    configuration: {
    dapr: {
    enabled: true
    appId: name
    appPort: port
    appProtocol: appProtocol
    }
    activeRevisionsMode: 'single'
    secrets: [
    {
    name: 'container-registry-password'
    value: registryPassword
    }
    ]

// Rest of template omitted for brevity...
    }
    }

    Some of these Dapr features can be surfaced through the program file. You can configure your HttpClient to leverage Dapr configurations when communicating with other apps in your environment.


    // reconfigure code to make requests to Dapr sidecar
    var baseURL = (Environment.GetEnvironmentVariable("BASE_URL") ?? "http://localhost") + ":" + (Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500");
    builder.Services.AddHttpClient("Products", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Products");
    });

    builder.Services.AddHttpClient("Inventory", (httpClient) =>
    {
    httpClient.BaseAddress = new Uri(baseURL);
    httpClient.DefaultRequestHeaders.Add("dapr-app-id", "Inventory");
    });


    Clean up resources

    If you're not going to continue to use this application, you can delete the Azure Container Apps and all the associated services by removing the resource group.

    Follow these steps in the Azure portal to remove the resources you created:

1. In the Azure portal, navigate to the msdocswebappapis resource group using the left navigation or search bar.
    2. Select the Delete resource group button at the top of the resource group Overview.
3. Enter the resource group name msdocswebappapis in the Are you sure you want to delete "msdocswebappapis" confirmation dialog.
    4. Select Delete.
      The process to delete the resource group may take a few minutes to complete.
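Alternatively, a single Azure CLI command deletes the resource group and everything in it (a sketch; --no-wait returns immediately while deletion continues in the background):

```bash
# Delete the resource group and all resources it contains
az group delete --name msdocswebappapis --yes --no-wait
```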

    · 9 min read
    Justin Yoo

    Welcome to Day 21 of #30DaysOfServerless!

    We've so far walked through what Azure Event Grid is and how it generally works. Today, let's discuss how Azure Event Grid deals with CloudEvents.


    What We'll Cover


    OK. Let's get started!

    What is CloudEvents?

Needless to say, events are everywhere. Events come not only from event-driven systems but also from many different systems and devices, including IoT devices like the Raspberry Pi.

    But the problem is that every event publisher (system/device that creates events) describes their events differently, meaning there is no standard way of describing events. It has caused many issues between systems, mainly from the interoperability perspective.

    1. Consistency: No standard way of describing events resulted in developers having to write their own event handling logic for each event source.
    2. Accessibility: There were no common libraries, tooling and infrastructure to deliver events across systems.
3. Productivity: Overall productivity decreases because of the lack of a standard event format.

    Cloud Events Logo

Therefore, the CNCF (Cloud Native Computing Foundation) introduced CloudEvents, a specification for describing event data in a common way. Conforming your event data to this spec simplifies event declaration and delivery across systems and platforms, resulting in a huge productivity increase.

    How Azure Event Grid brokers CloudEvents

Before CloudEvents, Azure Event Grid described events in its own way, so to use Azure Event Grid you had to follow the event format/schema that it declares. However, not every system, service or application follows the Azure Event Grid schema, so Azure Event Grid now supports the CloudEvents spec as both an input and an output format.

    Azure Event Grid for Azure

Take a look at the simple diagram below, which describes how Azure Event Grid captures events raised from various Azure services. In this diagram, Azure Key Vault takes the role of the event source or event publisher, and Azure Logic Apps takes the role of the event handler (I'll discuss Azure Logic Apps as the event handler later in this post). For events raised by Azure services, we use an Azure Event Grid System Topic.

    Azure Event Grid for Azure

    Therefore, let's create an Azure Event Grid System Topic that captures events raised from Azure Key Vault when a new version of a secret is added.

    Azure Event Grid System Topic for Key Vault

    As Azure Event Grid makes use of the pub/sub pattern, you need to create the Azure Event Grid Subscription to consume the events. Here's the subscription that uses the Event Grid data format:

Azure Event Grid System Subscription for Key Vault in Event Grid Format

    Once you create the subscription, create a new version of the secret on Azure Key Vault. Then, Azure Key Vault raises an event, which is captured in the Event Grid format:

    [
    {
    "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
    "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
    "subject": "hello",
    "eventType": "Microsoft.KeyVault.SecretNewVersionCreated",
    "data": {
    "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
    "VaultName": "kv-xxxxxxxx",
    "ObjectType": "Secret",
    "ObjectName": "hello",
    "Version": "064dfc082fec463f8d4610ed6118811d",
    "NBF": null,
    "EXP": null
    },
    "dataVersion": "1",
    "metadataVersion": "1",
    "eventTime": "2022-09-21T07:08:09.1234567Z"
    }
    ]

    So, how is it different from the CloudEvents format? Let's take a look. According to the spec, the JSON data in CloudEvents might look like this:

    {
    "id" : "C234-1234-1234",
    "source" : "/mycontext",
    "specversion" : "1.0",
    "type" : "com.example.someevent",
    "comexampleextension1" : "value",
    "time" : "2018-04-05T17:31:00Z",
    "datacontenttype" : "application/cloudevents+json",
    "data" : {
    "appinfoA" : "abc",
    "appinfoB" : 123,
    "appinfoC" : true
    }
    }

    This time, let's create another subscription using the CloudEvents schema. Here's how to create the subscription against the system topic:

    Azure Event Grid System Subscription for Key Vault in CloudEvents Format
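If you prefer the CLI over the portal, a sketch of creating a subscription on the Key Vault system topic that delivers events in the CloudEvents v1.0 schema might look like this (topic, resource group and endpoint names are placeholders):

```bash
# Create an event subscription on an existing system topic, delivering in the CloudEvents schema
az eventgrid system-topic event-subscription create \
  --name kv-secret-cloudevents-sub \
  --resource-group my-resource-group \
  --system-topic-name my-keyvault-system-topic \
  --endpoint-type webhook \
  --endpoint https://my-handler.example.com/api/events \
  --event-delivery-schema cloudeventschemav1_0
```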

    Therefore, Azure Key Vault emits the event data in the CloudEvents format:

    {
    "id": "6f44b9c0-d37e-40e7-89be-f70a6da291cc",
    "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-aegce-krc/providers/Microsoft.KeyVault/vaults/kv-xxxxxxxx",
    "specversion": "1.0",
    "type": "Microsoft.KeyVault.SecretNewVersionCreated",
    "subject": "hello",
    "time": "2022-09-21T07:08:09.1234567Z",
    "data": {
    "Id": "https://kv-xxxxxxxx.vault.azure.net/secrets/hello/064dfc082fec463f8d4610ed6118811d",
    "VaultName": "kv-xxxxxxxx",
    "ObjectType": "Secret",
    "ObjectName": "hello",
    "Version": "064dfc082fec463f8d4610ed6118811d",
    "NBF": null,
    "EXP": null
    }
    }

    Can you identify some differences between the Event Grid format and the CloudEvents format? Fortunately, both Event Grid schema and CloudEvents schema look similar to each other. But they might be significantly different if you use a different event source outside Azure.

    Azure Event Grid for Systems outside Azure

As mentioned above, event data produced outside Azure, or by your own applications within Azure, might not be understood by Azure Event Grid. In this case, we need to use an Azure Event Grid Custom Topic. Here's the diagram for it:

    Azure Event Grid for Applications outside Azure

    Let's create the Azure Event Grid Custom Topic. When you create the topic, make sure that you use the CloudEvent schema during the provisioning process:

    Azure Event Grid Custom Topic
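The same custom topic can also be provisioned from the CLI; the input schema flag is what tells Event Grid to expect CloudEvents (names and location are placeholders):

```bash
# Create a custom topic that accepts events in the CloudEvents v1.0 schema
az eventgrid topic create \
  --name my-cloudevents-topic \
  --resource-group my-resource-group \
  --location westus2 \
  --input-schema cloudeventschemav1_0
```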

    If your application needs to publish events to Azure Event Grid Custom Topic, your application should build the event data in the CloudEvents format. If you use a .NET application, add the NuGet package first.

    dotnet add package Azure.Messaging.EventGrid

    Then, create the publisher instance. You've already got the topic endpoint URL and the access key.

    var topicEndpoint = new Uri("<Azure Event Grid Custom Topic Endpoint URL>");
    var credential = new AzureKeyCredential("<Azure Event Grid Custom Topic Access Key>");
    var publisher = new EventGridPublisherClient(topicEndpoint, credential);
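If you need to look those two values up, the CLI can retrieve them (a sketch using the placeholder names from above):

```bash
# Topic endpoint URL
az eventgrid topic show --name my-cloudevents-topic --resource-group my-resource-group --query "endpoint" --output tsv

# One of the topic access keys
az eventgrid topic key list --name my-cloudevents-topic --resource-group my-resource-group --query "key1" --output tsv
```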

    Now, build the event data like below. Make sure that you follow the CloudEvents schema that requires additional metadata like event source, event type and content type.

    var source = "/your/event/source";
    var type = "com.source.event.your/OnEventOccurs";

    var data = new MyEventData() { Hello = "World" };

    var @event = new CloudEvent(source, type, data);

    And finally, send the event to Azure Event Grid Custom Topic.

    await publisher.SendEventAsync(@event);

    The captured event data looks like the following:

    {
    "id": "cc2b2775-52b8-43b8-a7cc-c1c33c2b2e59",
    "source": "/your/event/source",
    "type": "com.source.event.my/OnEventOccurs",
    "data": {
    "Hello": "World"
    },
    "time": "2022-09-21T07:08:09.1234567+00:00",
    "specversion": "1.0"
    }

However, due to limitations, an existing application might not be able to emit event data in the CloudEvents format. In this case, what should we do? Since there's no way for such an application to send its event data to an Azure Event Grid Custom Topic that expects the CloudEvents format, one approach is to put a converter between the existing application and the Custom Topic, like below:

    Azure Event Grid for Applications outside Azure with Converter

Once the Function app (or any converter app) receives legacy event data, it converts that data into the CloudEvents format internally and publishes it to Azure Event Grid.

    var data = default(MyRequestData);
    using (var reader = new StreamReader(req.Body))
    {
    var serialised = await reader.ReadToEndAsync();
    data = JsonConvert.DeserializeObject<MyRequestData>(serialised);
    }

    var converted = new MyEventData() { Hello = data.Lorem };
    var @event = new CloudEvent(source, type, converted);

    The converted event data is captured like this:

    {
    "id": "df296da3-77cd-4da2-8122-91f631941610",
    "source": "/your/event/source",
    "type": "com.source.event.my/OnEventOccurs",
    "data": {
    "Hello": "ipsum"
    },
    "time": "2022-09-21T07:08:09.1234567+00:00",
    "specversion": "1.0"
    }

    This approach is beneficial in many integration scenarios to make all the event data canonicalised.

    How Azure Logic Apps consumes CloudEvents

I put Azure Logic Apps as the event handler in the previous diagrams. According to the CloudEvents spec, each event handler must implement request validation to avoid abuse. One good thing about using Azure Logic Apps is that it already implements this request validation feature, which means we can simply subscribe to the topic and consume the event data.

Create a new Logic Apps instance and add the HTTP Request trigger. Once it's saved, you will get the endpoint URL.

    Azure Logic Apps with HTTP Request Trigger

    Then, create the Azure Event Grid Subscription with:

    • Endpoint type: Webhook
    • Endpoint URL: The Logic Apps URL from above.

    Azure Logic Apps with HTTP Request Trigger

    Once the subscription is ready, this Logic Apps works well as the event handler. Here's how it receives the CloudEvents data from the subscription.

    Azure Logic Apps that Received CloudEvents data

    Now you've got the CloudEvents data. It's entirely up to you to handle that event data however you want!

    Exercise: Try this yourself!

    You can fork this GitHub repository to your account and play around with it to see how Azure Event Grid with CloudEvents works. Alternatively, the "Deploy to Azure" button below will provision all necessary Azure resources and deploy an Azure Functions app to mimic the event publisher.

    Deploy To Azure

    Resources: For self-study!

    Want to know more about CloudEvents in real-life examples? Here are several resources you can take a look at:


    · 10 min read
    Ayca Bas

    Welcome to Day 20 of #30DaysOfServerless!

Every day, millions of people spend their precious time in productivity tools. What if you could use the data and intelligence behind Microsoft applications (Microsoft Teams, Outlook, and many other Office apps) to build seamless automations and custom apps that boost productivity?

    In this post, we'll learn how to build a seamless onboarding experience for new employees joining a company with the power of Microsoft Graph, integrated with Event Hubs and Logic Apps!


    What We'll Cover

    • ✨ The power of Microsoft Graph
    • 🖇️ How do Microsoft Graph and Event Hubs work together?
    • 🛠 Let's Build an Onboarding Workflow!
      • 1️⃣ Setup Azure Event Hubs + Key Vault
      • 2️⃣ Subscribe to users, receive change notifications from Logic Apps
      • 3️⃣ Create Onboarding workflow in the Logic Apps
    • 🚀 Debug: Your onboarding experience
    • ✋ Exercise: Try this tutorial out yourself!
    • 📚 Resources: For Self-Study


    ✨ The Power of Microsoft Graph

Microsoft Graph is the gateway to data and intelligence in the Microsoft 365 platform. It exposes REST APIs and client libraries to access data across Microsoft 365 core services such as Calendar, Teams, To Do, Outlook, People, Planner, OneDrive, OneNote and more.

    Overview of Microsoft Graph

You can build custom experiences by using Microsoft Graph, such as automating the onboarding process for new employees. When new employees are created in Azure Active Directory, they will be automatically added to the Onboarding team on Microsoft Teams.

    Solution architecture


    🖇️ Microsoft Graph with Event Hubs

    Microsoft Graph uses a webhook mechanism to track changes in resources and deliver change notifications to the clients. For example, with Microsoft Graph Change Notifications, you can receive change notifications when:

    • a new task is added in the to-do list
    • a user changes the presence status from busy to available
    • an event is deleted/cancelled from the calendar

If you'd like to track a large set of resources at a high frequency, use Azure Event Hubs instead of traditional webhooks to receive change notifications. Azure Event Hubs is a popular real-time event ingestion and distribution service built for scale.

    EVENT GRID - PARTNER EVENTS

Microsoft Graph Change Notifications can also be received by using Azure Event Grid -- currently available for Microsoft Partners! Read the Partner Events Overview documentation for details.

    Setup Azure Event Hubs + Key Vault.

To get Microsoft Graph Change Notifications delivered to Azure Event Hubs, we'll have to set up Azure Event Hubs and Azure Key Vault. We'll use Azure Key Vault to give Microsoft Graph access to the Event Hubs connection string.

    1️⃣ Create Azure Event Hubs

1. Go to the Azure Portal and select Create a resource, type Event Hubs and click Create.
    2. Fill in the Event Hubs namespace creation details, and then click Create.
    3. Go to the newly created Event Hubs namespace page, select Event Hubs tab from the left pane and + Event Hub:
      • Name your Event Hub as Event Hub
      • Click Create.
    4. Click the name of the Event Hub, and then select Shared access policies and + Add to add a new policy:
      • Give a name to the policy
      • Check Send and Listen
      • Click Create.
    5. After the policy has been created, click the name of the policy to open the details panel, and then copy the Connection string-primary key value. Write it down; you'll need it for the next step.
    6. Go to Consumer groups tab in the left pane and select + Consumer group, give a name for your consumer group as onboarding and select Create.
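If you'd like to script the same Event Hubs setup, a roughly equivalent Azure CLI sketch might be (namespace, hub, policy and resource group names are placeholders):

```bash
# Create the namespace, event hub, send/listen policy and the onboarding consumer group
az eventhubs namespace create --name my-graph-ns --resource-group my-resource-group --location westeurope
az eventhubs eventhub create --name onboarding-hub --namespace-name my-graph-ns --resource-group my-resource-group
az eventhubs eventhub authorization-rule create --name graph-policy \
  --eventhub-name onboarding-hub --namespace-name my-graph-ns --resource-group my-resource-group \
  --rights Send Listen
az eventhubs eventhub consumer-group create --name onboarding \
  --eventhub-name onboarding-hub --namespace-name my-graph-ns --resource-group my-resource-group

# Copy the connection string for the policy; it becomes the Key Vault secret value in the next step
az eventhubs eventhub authorization-rule keys list --name graph-policy \
  --eventhub-name onboarding-hub --namespace-name my-graph-ns --resource-group my-resource-group \
  --query primaryConnectionString --output tsv
```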

    2️⃣ Create Azure Key Vault

    1. Go to Azure Portal and select Create a resource, type Key Vault and select Create.
    2. Fill in the Key Vault creation details, and then click Review + Create.
    3. Go to newly created Key Vault and select Secrets tab from the left pane and click + Generate/Import:
      • Give a name to the secret
      • For the value, paste in the connection string you generated at the Event Hubs step
      • Click Create
      • Copy the name of the secret.
    4. Select Access Policies from the left pane and + Add Access Policy:
      • For Secret permissions, select Get
      • For Principal, select Microsoft Graph Change Tracking
      • Click Add.
    5. Select Overview tab from the left pane and copy the Vault URI.
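A CLI sketch of the same Key Vault setup (vault and secret names are placeholders; the service principal lookup assumes a recent Azure CLI where the object id is returned as id):

```bash
# Create the vault and store the Event Hubs connection string as a secret
az keyvault create --name my-graph-kv --resource-group my-resource-group --location westeurope
az keyvault secret set --vault-name my-graph-kv --name eventhub-connection --value "<connection-string-from-event-hubs>"

# Grant the Microsoft Graph Change Tracking service principal permission to read secrets
graphSpId=$(az ad sp list --display-name "Microsoft Graph Change Tracking" --query "[0].id" --output tsv)
az keyvault set-policy --name my-graph-kv --object-id "$graphSpId" --secret-permissions get

# The vault URI used later in the notificationUrl
az keyvault show --name my-graph-kv --query "properties.vaultUri" --output tsv
```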

    Subscribe for Logic Apps change notifications

To start receiving Microsoft Graph Change Notifications, we'll need to create a subscription to the resource that we'd like to track - here, users. We'll use Azure Logic Apps to create the subscription.

To create a subscription for Microsoft Graph Change Notifications, we'll need to make an HTTP POST request to https://graph.microsoft.com/v1.0/subscriptions. Microsoft Graph requires Azure Active Directory authentication to make API calls. First, we'll need to register an app in Azure Active Directory, and then we will make the Microsoft Graph Subscription API call with Azure Logic Apps.

    1️⃣ Create an app in Azure Active Directory

    1. In the Azure Portal, go to Azure Active Directory and select App registrations from the left pane and select + New registration. Fill in the details for the new App registration form as below:
      • Name: Graph Subscription Flow Auth
      • Supported account types: Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
      • Select Register.
    2. Go to newly registered app in Azure Active Directory, select API permissions:
      • Select + Add a permission and Microsoft Graph
      • Select Application permissions and add User.Read.All and Directory.Read.All.
      • Select Grant admin consent for the organization
    3. Select Certificates & secrets tab from the left pane, select + New client secret:
      • Choose desired expiry duration
      • Select Add
      • Copy the value of the secret.
    4. Go to Overview from the left pane, copy Application (client) ID and Directory (tenant) ID.

    2️⃣ Create subscription with Azure Logic Apps

1. Go to the Azure Portal and select Create a resource, type Logic apps and click Create.

    2. Fill in the Logic Apps creation details, and then click Create.

    3. Go to the newly created Logic Apps page, select Workflows tab from the left pane and select + Add:

      • Give a name to the new workflow as graph-subscription-flow
      • Select Stateful as a state type
      • Click Create.
    4. Go to graph-subscription-flow, and then select Designer tab.

    5. In the Choose an operation section, search for Schedule and select Recurrence as a trigger. Fill in the parameters as below:

      • Interval: 61
      • Frequency: Minute
      • Time zone: Select your own time zone
      • Start time: Set a start time
    6. Select + button in the flow and select add an action. Search for HTTP and select HTTP as an action. Fill in the parameters as below:

      • Method: POST
      • URI: https://graph.microsoft.com/v1.0/subscriptions
      • Headers:
        • Key: Content-type
        • Value: application/json
      • Body:
      {
      "changeType": "created, updated",
      "clientState": "secretClientValue",
      "expirationDateTime": "@{addHours(utcNow(), 1)}",
      "notificationUrl": "EventHub:https://<YOUR-VAULT-URI>/secrets/<YOUR-KEY-VAULT-SECRET-NAME>?tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47",
      "resource": "users"
      }

      In notificationUrl, make sure to replace <YOUR-VAULT-URI> with the vault uri and <YOUR-KEY-VAULT-SECRET-NAME> with the secret name that you copied from the Key Vault.

      In resource, define the resource type you'd like to track changes. For our example, we will track changes for users resource.

      • Authentication:
        • Authentication type: Active Directory OAuth
        • Authority: https://login.microsoft.com
        • Tenant: Directory (tenant) ID copied from AAD app
        • Audience: https://graph.microsoft.com
        • Client ID: Application (client) ID copied from AAD app
        • Credential Type: Secret
        • Secret: value of the secret copied from AAD app
    7. Select Save and run your workflow from the Overview tab.

  Check your subscription in Graph Explorer: If you'd like to make sure that your subscription was created successfully by Logic Apps, you can go to Graph Explorer, log in with your Microsoft 365 account and make a GET request to https://graph.microsoft.com/v1.0/subscriptions. Your subscription should appear in the response after it's created successfully.

    Subscription workflow success

After the subscription is created successfully by Logic Apps, Azure Event Hubs will receive notifications whenever a new user is created in Azure Active Directory.


    Create Onboarding workflow in Logic Apps

We'll create a second workflow in Logic Apps to receive change notifications from Event Hubs when a new user is created in Azure Active Directory, and to add the new user to the Onboarding team on Microsoft Teams.

    1. Go to the Logic Apps you created in the previous steps, select Workflows tab and create a new workflow by selecting + Add:
      • Give a name to the new workflow as teams-onboarding-flow
      • Select Stateful as a state type
      • Click Create.
    2. Go to teams-onboarding-flow, and then select Designer tab.
    3. In the Choose an operation section, search for Event Hub, select When events are available in Event Hub as a trigger. Setup Event Hub connection as below:
      • Create Connection:
        • Connection name: Connection
        • Authentication Type: Connection String
        • Connection String: Go to Event Hubs > Shared Access Policies > RootManageSharedAccessKey and copy Connection string–primary key
        • Select Create.
      • Parameters:
        • Event Hub Name: Event Hub
        • Consumer Group Name: onboarding
    4. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: Events
    5. Inside For each, select + in the flow and add an action, search for Data operations and select Parse JSON. Fill in Parse JSON action as below:
      • Content: Events Content
      • Schema: Copy the json content from schema-parse.json and paste as a schema
    6. Select + in the flow and add an action, search for Control and add For each as an action. Fill in For each action as below:
      • Select output from previous steps: value
      1. Inside For each, select + in the flow and add an action, search for Microsoft Teams and select Add a member to a team. Login with your Microsoft 365 account to create a connection and fill in Add a member to a team action as below:
      • Team: Create an Onboarding team on Microsoft Teams and select it
      • A user AAD ID for the user to add to a team: id
    7. Select Save.

    🚀 Debug your onboarding experience

    To debug our onboarding experience, we'll create a new user in Azure Active Directory and check that they're automatically added to the Onboarding team in Microsoft Teams.

    1. Go to the Azure portal, select Azure Active Directory from the left pane, and go to Users. Select + New user and then Create new user. Fill in the details as below:

      • User name: JaneDoe
      • Name: Jane Doe

      new user in Azure Active Directory

    2. Adding Jane Doe as a new user should trigger the teams-onboarding-flow to run. teams onboarding flow success

    3. Once the teams-onboarding-flow runs successfully, you should be able to see Jane Doe as a member of the Onboarding team on Microsoft Teams! 🥳 new member in Onboarding team on Microsoft Teams

    Congratulations! 🎉

    You just built an onboarding experience using Azure Logic Apps, Azure Event Hubs and Azure Key Vault.


    📚 Resources

    - + \ No newline at end of file diff --git a/blog/tags/serverless-september/page/9/index.html b/blog/tags/serverless-september/page/9/index.html index e0a0ed0313..898dcd4cc9 100644 --- a/blog/tags/serverless-september/page/9/index.html +++ b/blog/tags/serverless-september/page/9/index.html @@ -14,13 +14,13 @@ - +

    33 posts tagged with "serverless-september"

    View All Tags

    · 6 min read
    Ramya Oruganti

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Retry Policy Support - in Apache Kafka Extension
    • AutoOffsetReset property - in Apache Kafka Extension
    • Key support for Kafka messages - in Apache Kafka Extension
    • References: Apache Kafka Extension for Azure Functions


    Recently we launched the Apache Kafka extension for Azure functions in GA with some cool new features like deserialization of Avro Generic records and Kafka headers support. We received great responses - so we're back with more updates!

    Retry Policy support

    Handling errors in Azure Functions is important to avoid data loss and missed events, and to monitor the health of an application. The Apache Kafka Extension for Azure Functions supports retry policy, which tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.

    A retry policy is evaluated when a trigger function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry.

    The policy supports two retry strategies that you can configure: fixed delay and exponential backoff.

    1. Fixed Delay - A specified amount of time is allowed to elapse between each retry.
    2. Exponential Backoff - The first retry waits for the minimum delay. On subsequent retries, time is added exponentially to the initial duration for each retry, until the maximum delay is reached. Exponential back-off adds some small randomization to delays to stagger retries in high-throughput scenarios.
    Please Note

    Retry Policy for the Kafka extension is NOT supported for C# (in-proc and out-of-proc) trigger and output bindings. It is supported for Java, Node (JS, TypeScript), PowerShell, and Python trigger and output bindings.

    Here is a sample code view of the exponential backoff retry strategy:

    Error Handling with Apache Kafka extension for Azure Functions
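    For reference, a minimal sketch of what an exponential backoff retry policy can look like in a function's function.json is shown below (the binding values such as myTopic and %BrokerList% are illustrative placeholders):

    {
      "retry": {
        "strategy": "exponentialBackoff",
        "maxRetryCount": 5,
        "minimumInterval": "00:00:10",
        "maximumInterval": "00:15:00"
      },
      "bindings": [
        {
          "type": "kafkaTrigger",
          "name": "kafkaEvent",
          "direction": "in",
          "topic": "myTopic",
          "brokerList": "%BrokerList%",
          "consumerGroup": "$Default"
        }
      ]
    }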

    AutoOffsetReset property

    The AutoOffsetReset property enables customers to configure the behaviour in the absence of an initial offset. Imagine a scenario where you need to change the consumer group name: the consumer connected using the new consumer group had to reprocess all events starting from the oldest (earliest) one, because that was the default and this setting wasn't previously exposed as a configurable option in the Apache Kafka extension for Azure Functions. With the help of this Kafka setting, you can now configure how to start processing events for newly created consumer groups.

    Because this setting couldn't be configured, offset commit errors were causing topics to restart from the earliest offset. Users wanted to be able to set the offset to either latest or earliest based on their requirements.

    We are happy to share that we have made the AutoOffsetReset setting configurable to either Earliest (default) or Latest. Setting the value to Earliest configures consumption of messages from the earliest/smallest offset, or the beginning of the topic partition. Setting the property to Latest configures consumption of messages from the latest/largest offset, or the end of the topic partition. This is supported for all the Azure Functions supported languages (C# (in & out), Java, Node (JS and TypeScript), PowerShell and Python) and can be used for both triggers and output bindings.

    Error Handling with Apache Kafka extension for Azure Functions

    Key support for Kafka messages

    With keys, the producer/output binding can determine the broker and partition to write to based on the message. So alongside the message value, we can choose to send a message key, and that key can be whatever you want: a string, a number, and so on. If you don't send a key, the key is set to null and the data is sent to partitions in a round-robin fashion, to keep things simple. But if you do send a key with your message, all the messages that share the same key will always go to the same partition, so you can group related messages into partitions.

    Previously, when consuming a Kafka event message using the Azure Functions Kafka extension, the event key was always None even though the key was present in the event message.

    Key support was implemented in the extension, enabling customers to view the key on Kafka event messages coming into the Kafka trigger and to set keys on messages going to Kafka topics through the output binding. Key support covers both the trigger and output binding for all Azure Functions supported languages (C# (in & out), Java, Node (JS and TypeScript), PowerShell and Python).

    Here is the view of an output binding producer code where Kafka messages are being set with key

    Error Handling with Apache Kafka extension for Azure Functions

    Conclusion:

    In this article you have learnt about the latest additions to the Apache Kafka extension for Azure Functions. In case you have been waiting for these features to get released or need them, you are all set - please go ahead and try them out! They are available in the latest extension bundles.

    Want to learn more?

    Please refer to Apache Kafka bindings for Azure Functions | Microsoft Docs for detailed documentation, samples for the Azure Functions supported languages, and more!

    References

    FEEDBACK WELCOME

    Keep in touch with us on Twitter via @AzureFunctions.

    - + \ No newline at end of file diff --git a/blog/tags/serverless/index.html b/blog/tags/serverless/index.html index 3a0da8a058..325887224b 100644 --- a/blog/tags/serverless/index.html +++ b/blog/tags/serverless/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "serverless"

    View All Tags

    · 8 min read
    Rory Preddy

    Welcome to Day 4 of #30DaysOfServerless!

    Yesterday we walked through an Azure Functions Quickstart with JavaScript, and used it to understand the general Functions App structure, tooling and developer experience.

    Today we'll look at developing Functions app with a different programming language - namely, Java - and explore developer guidance, tools and resources to build serverless Java solutions on Azure.


    What We'll Cover


    Developer Guidance

    If you're a Java developer new to serverless on Azure, start by exploring the Azure Functions Java Developer Guide. It covers:

    In this blog post, we'll dive into one quickstart, and discuss other resources briefly, for awareness! Do check out the recommended exercises and resources for self-study!


    My First Java Functions App

    In today's post, we'll walk through the Quickstart: Azure Functions tutorial using Visual Studio Code. In the process, we'll set up our development environment with the relevant command-line tools and VS Code extensions to make building Functions apps simpler.

      Note: Completing this exercise may incur a cost of a few USD cents based on your Azure subscription. Explore pricing details to learn more.

    First, make sure you have your development environment set up and configured.

    PRE-REQUISITES
    1. An Azure account with an active subscription - Create an account for free
    2. The Java Development Kit, version 11 or 8. - Install
    3. Apache Maven, version 3.0 or above. - Install
    4. Visual Studio Code. - Install
    5. The Java extension pack - Install
    6. The Azure Functions extension for Visual Studio Code - Install

    VS Code Setup

    NEW TO VISUAL STUDIO CODE?

    Start with the Java in Visual Studio Code tutorial to jumpstart your learning!

    Install the Extension Pack for Java (shown below) to get six popular extensions that support the development workflow from creation to testing, debugging, and deployment.

    Extension Pack for Java

    Now, it's time to get started on our first Java-based Functions app.

    1. Create App

    1. Open a command-line terminal and create a folder for your project. Use the code command to launch Visual Studio Code from that directory as shown:

      $ mkdir java-function-resource-group-api
      $ cd java-function-resource-group-api
      $ code .
    2. Open the Visual Studio Code Command Palette (Ctrl + Shift + P) and select Azure Functions: create new project to kickstart the create workflow. Alternatively, you can click the Azure icon (on the activity sidebar) to get the Workspace window, click "+", and pick the "Create Function" option as shown below.

      Screenshot of creating function in Azure from Visual Studio Code.

    3. This triggers a multi-step workflow. Fill in the information for each step as shown in the following prompts. Important: Start this process from an empty folder - the workflow will populate it with the scaffold for your Java-based Functions app.

      • Choose the directory location: You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
      • Select a language: Choose Java.
      • Select a version of Java: Choose Java 11 or Java 8, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally.
      • Provide a group ID: Choose com.function.
      • Provide an artifact ID: Enter myFunction.
      • Provide a version: Choose 1.0-SNAPSHOT.
      • Provide a package name: Choose com.function.
      • Provide an app name: Enter HttpExample.
      • Select the build tool for Java project: Choose Maven.

    Visual Studio Code uses the provided information and generates an Azure Functions project. You can view the local project files in the Explorer - it should look like this:

    Azure Functions Scaffold For Java

    2. Preview App

    Visual Studio Code integrates with the Azure Functions Core tools to let you run this project on your local development computer before you publish to Azure.

    1. To build and run the application, use the following Maven command. You should see output similar to that shown below.

      $ mvn clean package azure-functions:run
      ..
      ..
      Now listening on: http://0.0.0.0:7071
      Application started. Press Ctrl+C to shut down.

      Http Functions:

      HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
      ...
    2. Copy the URL of your HttpExample function from this output to a browser and append the query string ?name=<YOUR_NAME>, making the full URL something like http://localhost:7071/api/HttpExample?name=Functions. The browser should display a message that echoes back your query string value. The terminal in which you started your project also shows log output as you make requests.

    🎉 CONGRATULATIONS

    You created and ran a function app locally!

    With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger. After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code and Maven to publish and test the project on Azure.

    3. Sign into Azure

    Before you can deploy, sign in to your Azure subscription.

    az login

    The az login command signs you into your Azure account.

    Use the following command to deploy your project to a new function app.

    mvn clean package azure-functions:deploy

    When the creation is complete, the following Azure resources are created in your subscription:

    • Resource group. Named as java-functions-group.
    • Storage account. Required by Functions. The name is generated randomly based on Storage account name requirements.
    • Hosting plan. Serverless hosting for your function app. The name is java-functions-app-service-plan.
    • Function app. A function app is the deployment and execution unit for your functions. The name is randomly generated based on your artifactId, appended with a randomly generated number.

    4. Deploy App

    1. Back in the Resources area in the side bar, expand your subscription, your new function app, and Functions. Right-click (Windows) or Ctrl - click (macOS) the HttpExample function and choose Execute Function Now....

      Screenshot of executing function in Azure from Visual Studio Code.

    2. In Enter request body you see the request message body value of { "name": "Azure" }. Press Enter to send this request message to your function.

    3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.

    You can also copy the complete Invoke URL shown in the output of the publish command into a browser address bar, appending the query parameter ?name=Functions. The browser should display similar output as when you ran the function locally.

    🎉 CONGRATULATIONS

    You deployed your function app to Azure, and invoked it!

    5. Clean up

    Use the following command to delete the resource group and all its contained resources to avoid incurring further costs.

    az group delete --name java-functions-group

    Next Steps

    So, where can you go from here? The example above used a familiar HTTP Trigger scenario with a single Azure service (Azure Functions). Now, think about how you can build richer workflows by using other triggers and integrating with other Azure or third-party services.

    Other Triggers, Bindings

    Check out Azure Functions Samples In Java for samples (and short use cases) that highlight other triggers - with code! This includes triggers to integrate with CosmosDB, Blob Storage, Event Grid, Event Hub, Kafka and more.

    Scenario with Integrations

    Once you've tried out the samples, try building an end-to-end scenario by using these triggers to integrate seamlessly with other Services. Here are a couple of useful tutorials:

    Exercise

    Time to put this into action and validate your development workflow:

    Resources

    - + \ No newline at end of file diff --git a/blog/tags/students/index.html b/blog/tags/students/index.html index e37dbdedac..161fdf483a 100644 --- a/blog/tags/students/index.html +++ b/blog/tags/students/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "students"

    View All Tags

    · 3 min read
    Sara Gibbons

    ✨ Serverless September For Students

    My love for the tech industry grows as it evolves. It's not just the new technologies to play with, but seeing how paths into a tech career continue to expand, bringing so many new voices, ideas, and perspectives to our industry - with serverless computing removing barriers to entry for so many.

    It's a reason I enjoy working with universities and students. I get to hear the excitement of learning, fresh ideas and perspectives from our student community. All you students are incredible! How you view serverless, and what it can do, so cool!

    This year for Serverless September we want to hear all the amazing ways our student community is learning and working with Azure Serverless, and we have all-new ways for you to participate.

    Getting Started

    If you don't already have an Azure for Students account you can easily get your FREE account created at Azure for Students Sign up.

    If you are new to serverless, here are a couple links to get you started:

    No Experience, No problem

    For Serverless September we have planned beginner friendly content all month long. Covering such services as:

    You can follow #30DaysOfServerless here on the blog for daily posts covering concepts, scenarios, and how to create end-to-end solutions.

    Join the Cloud Skills Challenge where we have selected a list of Learn Modules for you to go through at your own pace, including deploying a full stack application with Azure Static Web Apps.

    Have A Question

    We want to hear it! All month long we will have Ask The Expert sessions. Submit your questions at any time and we will be sure to get one of our Azure Serverless experts to get you an answer.

    Share What You've Created

    If you have written a blog post, recorded a video, or have an open-source Azure Serverless project, we'd love to see it! Here are some links for you to share your creations:

    🧭 Explore Student Resources

    ⚡️ Join us!

    Multiple teams across Microsoft are working to create Serverless September! They all want to hear from our incredible student community. We can't wait to share all the Serverless September resources and hear what you have learned and created. Here are some ways to keep up to date on all Serverless September activity:

    - + \ No newline at end of file diff --git a/blog/tags/vscode/index.html b/blog/tags/vscode/index.html index 241d1831da..91446f46b0 100644 --- a/blog/tags/vscode/index.html +++ b/blog/tags/vscode/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "vscode"

    View All Tags

    · 9 min read
    Nitya Narasimhan

    Welcome to Day 3 of #30DaysOfServerless!

    Yesterday we learned core concepts and terminology for Azure Functions, the signature Functions-as-a-Service option on Azure. Today we take our first steps into building and deploying an Azure Functions app, and validate local development setup.

    Ready? Let's go.


    What We'll Cover


    Developer Guidance

    Before we jump into development, let's familiarize ourselves with language-specific guidance from the Azure Functions Developer Guide. We'll review the JavaScript version but guides for F#, Java, Python, C# and PowerShell are also available.

    1. A function is defined by two things: code (written in a supported programming language) and configuration (specified in a function.json file, declaring the triggers, bindings and other context for execution).

    2. A function app is the unit of deployment for your functions, and is associated with a single execution context or runtime. It can contain multiple functions, but they must be in the same language.

    3. A host configuration is runtime-specific configuration that affects all functions running in a given function app instance. It is defined in a host.json file.

    4. A recommended folder structure is defined for the function app, but may vary based on the programming language used. Check the documentation on folder structures to learn the default for your preferred language.

    Here's an example of the JavaScript folder structure for a function app containing two functions with some shared dependencies. Note that host.json (runtime configuration) is defined once, in the root directory. And function.json is defined separately for each function.

    FunctionsProject
    | - MyFirstFunction
    | | - index.js
    | | - function.json
    | - MySecondFunction
    | | - index.js
    | | - function.json
    | - SharedCode
    | | - myFirstHelperFunction.js
    | | - mySecondHelperFunction.js
    | - node_modules
    | - host.json
    | - package.json
    | - local.settings.json

    We'll dive into what the contents of these files look like, when we build and deploy the first function. We'll cover local.settings.json in the About Local Testing section at the end.


    My First Function App

    The documentation provides quickstart options for all supported languages. We'll walk through the JavaScript versions in this article. You have two options for development:

    I'm a huge fan of VS Code - so I'll be working through that tutorial today.

    PRE-REQUISITES

    Don't forget to validate your setup by checking the versions of installed software.

    Install VSCode Extension

    Installing the Visual Studio Code extension should automatically open this page in your IDE with similar quickstart instructions, but potentially more recent screenshots.

    Visual Studio Code Extension for VS Code

    Note that it may make sense to install the Azure tools for Visual Studio Code extensions pack if you plan on working through the many projects in Serverless September. This includes the Azure Functions extension by default.

    Create First Function App

    Walk through the Create local [project] steps of the quickstart. The process is quick and painless and scaffolds out this folder structure and files. Note the existence (and locations) of the function.json and host.json files.

    Final screenshot for VS Code workflow

    Explore the Code

    Check out the function.json configuration file. It shows that the function is activated by an httpTrigger with an input binding (tied to the req payload) and an output binding (tied to the res payload). And it supports both GET and POST requests on the exposed URL.

    {
      "bindings": [
        {
          "authLevel": "anonymous",
          "type": "httpTrigger",
          "direction": "in",
          "name": "req",
          "methods": [
            "get",
            "post"
          ]
        },
        {
          "type": "http",
          "direction": "out",
          "name": "res"
        }
      ]
    }

    Check out index.js - the function implementation. We see it logs a message to the console when invoked. It then extracts a name value from the input payload (req) and crafts a different responseMessage based on the presence/absence of a valid name. It returns this response in the output payload (res).

    module.exports = async function (context, req) {
      context.log('JavaScript HTTP trigger function processed a request.');

      const name = (req.query.name || (req.body && req.body.name));
      const responseMessage = name
        ? "Hello, " + name + ". This HTTP triggered function executed successfully."
        : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";

      context.res = {
        // status: 200, /* Defaults to 200 */
        body: responseMessage
      };
    }

    Preview Function App Locally

    You can now run this function app locally using Azure Functions Core Tools. VS Code integrates seamlessly with this CLI-based tool, making it possible for you to exploit all its capabilities without leaving the IDE. In fact, the workflow will even prompt you to install those tools if they didn't already exist in your local dev environment.

    Now run the function app locally by clicking on the "Run and Debug" icon in the activity bar (highlighted, left) and pressing the "▶️" (Attach to Node Functions) to start execution. On success, your console output should show something like this.

    Final screenshot for VS Code workflow

    You can test the function locally by visiting the Function Url shown (http://localhost:7071/api/HttpTrigger1) or by opening the Workspace region of the Azure extension, and selecting the Execute Function now menu item as shown.

    Final screenshot for VS Code workflow

    In the latter case, the Enter request body popup will show a pre-populated request of {"name":"Azure"} that you can submit.

    Final screenshot for VS Code workflow

    On successful execution, your VS Code window will show a notification as follows. Take note of the console output - it shows the message encoded in index.js.

    Final screenshot for VS Code workflow

    You can also visit the function URL directly in a local browser - testing the case for a request made with no name payload attached. Note how the response in the browser now shows the non-personalized version of the message!

    Final screenshot for VS Code workflow

    🎉 Congratulations

    You created and ran a function app locally!

    (Re)Deploy to Azure

    Now, just follow the creating a function app in Azure steps to deploy it to Azure, using an active subscription! The deployed app resource should now show up under the Function App Resources where you can click Execute Function Now to test the Azure-deployed version instead. You can also look up the function URL in the portal and visit that link in your local browser to trigger the function without the name context.

    🎉 Congratulations

    You have an Azure-hosted serverless function app!

    Challenge yourself and try to change the code and redeploy to Azure to return something different. You have effectively created a serverless API endpoint!


    About Core Tools

    That was a lot to cover! In the next few days we'll have more examples for Azure Functions app development - focused on different programming languages. So let's wrap today's post by reviewing two helpful resources.

    First, let's talk about Azure Functions Core Tools - the command-line tool that lets you develop, manage, and deploy Azure Functions projects from your local development environment. It is used transparently by the VS Code extension - but you can use it directly from a terminal for a powerful command-line end-to-end developer experience! The Core Tools commands are organized into the following contexts:

    Learn how to work with Azure Functions Core Tools. Not only can it help with quick command execution, it can also be invaluable for debugging issues that may not always be visible or understandable in an IDE.
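    For instance, a typical end-to-end flow with Core Tools looks something like this (the project and app names are placeholders):

    func init MyFunctionsProject --worker-runtime node
    func new --template "HTTP trigger" --name HttpExample
    func start
    func azure functionapp publish <APP_NAME>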

    About Local Testing

    You might have noticed that the scaffold also produced a local.settings.json file. What is that and why is it useful? By definition, the local.settings.json file "stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally."
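    For reference, a minimal local.settings.json for a JavaScript Functions project might look like the following sketch (the storage value assumes you're using the local storage emulator):

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "node"
      }
    }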

    Read the guidance on Code and test Azure Functions Locally to learn more about how to configure development environments locally, for your preferred programming language, to support testing and debugging on the local Functions runtime.

    Exercise

    We made it! Now it's your turn!! Here are a few things you can try to apply what you learned and reinforce your understanding:

    Resources

    Bookmark and visit the #30DaysOfServerless Collection. It's the one-stop collection of resources we will keep updated with links to relevant documentation and learning resources.

    - + \ No newline at end of file diff --git a/blog/tags/zero-to-hero/index.html b/blog/tags/zero-to-hero/index.html index 8a69de7d3d..4887fc16f2 100644 --- a/blog/tags/zero-to-hero/index.html +++ b/blog/tags/zero-to-hero/index.html @@ -14,14 +14,14 @@ - +

    8 posts tagged with "zero-to-hero"

    View All Tags

    · 5 min read
    Madhura Bharadwaj

    Welcome to Day 26 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Monitoring your Azure Functions
    • Built-in log streaming
    • Live Metrics stream
    • Troubleshooting Azure Functions


    Monitoring your Azure Functions:

    Azure Functions uses Application Insights to collect and analyze log data from individual function executions in your function app.

    Using Application Insights

    Application Insights collects log, performance, and error data. Because it automatically detects performance anomalies and includes powerful analytics tools, it helps you diagnose issues and better understand how your functions are used. These tools are designed to help you continuously improve the performance and usability of your functions. You can even use Application Insights during local function app project development.

    Typically, you create an Application Insights instance when you create your function app. In this case, the instrumentation key required for the integration is already set as an application setting named APPINSIGHTS_INSTRUMENTATIONKEY. With Application Insights integration enabled, telemetry data is sent to your connected Application Insights instance. This data includes logs generated by the Functions host, traces written from your functions code, and performance data. In addition to data from your functions and the Functions host, you can also collect data from the Functions scale controller.

    By default, the data collected from your function app is stored in Application Insights. In the Azure portal, Application Insights provides an extensive set of visualizations of your telemetry data. You can drill into error logs and query events and metrics. To learn more, including basic examples of how to view and query your collected data, see Analyze Azure Functions telemetry in Application Insights.
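    As one example of tuning this integration, you can adjust telemetry sampling in your host.json - a minimal sketch (the values shown are illustrative):

    {
      "version": "2.0",
      "logging": {
        "applicationInsights": {
          "samplingSettings": {
            "isEnabled": true,
            "excludedTypes": "Request"
          }
        }
      }
    }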

    Using Log Streaming

    In addition to this, you can have a smoother debugging experience through log streaming. There are two ways to view a stream of log files being generated by your function executions.

    • Built-in log streaming: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during local development and when you use the Test tab in the portal. All log-based information is displayed. For more information, see Stream logs. This streaming method supports only a single instance and can't be used with an app running on Linux in a Consumption plan.
    • Live Metrics Stream: when your function app is connected to Application Insights, you can view log data and other metrics in near real-time in the Azure portal using Live Metrics Stream. Use this method when monitoring functions running on multiple instances or on Linux in a Consumption plan. This method uses sampled data. Log streams can be viewed both in the portal and in most local development environments.
    Monitoring Azure Functions

    Learn how to configure monitoring for your Azure Functions. See the Monitoring Azure Functions data reference for detailed information on the metrics and logs created by Azure Functions.

    In addition, Azure Functions uses Azure Monitor to monitor the health of your function apps. Azure Functions collects the same kinds of monitoring data as other Azure resources, as described in Azure Monitor data collection.

    Troubleshooting your Azure Functions:

    When you do run into issues with your function app, Azure Functions diagnostics points out what’s wrong. It guides you to the right information to troubleshoot and resolve the issue more easily and quickly.

    Let’s explore how to use Azure Functions diagnostics to diagnose and solve common function app issues.

    1. Navigate to your function app in the Azure portal.
    2. Select Diagnose and solve problems to open Azure Functions diagnostics.
    3. Once you’re here, there are multiple ways to retrieve the information you’re looking for. Choose a category that best describes the issue of your function app by using the keywords in the homepage tile. You can also type a keyword that best describes your issue in the search bar. There’s also a section at the bottom of the page that will directly take you to some of the more popular troubleshooting tools. For example, you could type execution to see a list of diagnostic reports related to your function app execution and open them directly from the homepage.

    Monitoring and troubleshooting apps in Azure Functions

    1. For example, click on the Function App Down or Reporting Errors link under the Popular troubleshooting tools section. You will find detailed analysis, insights, and next steps for the issues that were detected. On the left you'll see a list of detectors. Click on them to explore more, or if there's a particular keyword you want to look for, type it into the search bar at the top.

    Monitoring and troubleshooting apps in Azure Functions

    TROUBLESHOOTING TIP

    Here are some general troubleshooting tips that you can follow if you find your function app throwing an "Azure Functions Runtime unreachable" error.

    Also be sure to check out the recommended best practices to ensure your Azure Functions are highly reliable. This article details some best practices for designing and deploying efficient function apps that remain healthy and perform well in a cloud-based environment.

    Bonus tip:

    - + \ No newline at end of file diff --git a/blog/tags/zero-to-hero/page/2/index.html b/blog/tags/zero-to-hero/page/2/index.html index c5b2682c39..ab6f519380 100644 --- a/blog/tags/zero-to-hero/page/2/index.html +++ b/blog/tags/zero-to-hero/page/2/index.html @@ -14,13 +14,13 @@ - +

    8 posts tagged with "zero-to-hero"

    View All Tags

    · 6 min read
    Ramya Oruganti

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Retry Policy Support - in Apache Kafka Extension
    • AutoOffsetReset property - in Apache Kafka Extension
    • Key support for Kafka messages - in Apache Kafka Extension
    • References: Apache Kafka Extension for Azure Functions


    Recently we launched the Apache Kafka extension for Azure functions in GA with some cool new features like deserialization of Avro Generic records and Kafka headers support. We received great responses - so we're back with more updates!

    Retry Policy support

    Handling errors in Azure Functions is important to avoid data loss and missed events, and to monitor the health of an application. The Apache Kafka Extension for Azure Functions supports retry policy, which tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.

    A retry policy is evaluated when a trigger function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry.

    The policy supports two retry strategies that you can configure: fixed delay and exponential backoff.

    1. Fixed Delay - A specified amount of time is allowed to elapse between each retry.
    2. Exponential Backoff - The first retry waits for the minimum delay. On subsequent retries, time is added exponentially to the initial duration for each retry, until the maximum delay is reached. Exponential back-off adds some small randomization to delays to stagger retries in high-throughput scenarios.
    Please Note

    Retry Policy for the Kafka extension is NOT supported for C# (in-proc and out-of-proc) trigger and output bindings. It is supported for Java, Node (JS, TypeScript), PowerShell, and Python trigger and output bindings.

    Here is a sample code view of the exponential backoff retry strategy:

    Error Handling with Apache Kafka extension for Azure Functions

    AutoOffsetReset property

    The AutoOffsetReset property enables customers to configure the behaviour in the absence of an initial offset. Imagine a scenario where you need to change the consumer group name: the consumer connected using the new consumer group had to reprocess all events starting from the oldest (earliest) one, because that was the default and this setting wasn't previously exposed as a configurable option in the Apache Kafka extension for Azure Functions. With the help of this Kafka setting, you can now configure how to start processing events for newly created consumer groups.

    Because this setting couldn't be configured, offset commit errors were causing topics to restart from the earliest offset. Users wanted to be able to set the offset to either latest or earliest based on their requirements.

    We are happy to share that we have made the AutoOffsetReset setting configurable to either Earliest (default) or Latest. Setting the value to Earliest configures consumption of messages from the earliest/smallest offset, or the beginning of the topic partition. Setting the property to Latest configures consumption of messages from the latest/largest offset, or the end of the topic partition. This is supported for all the Azure Functions supported languages (C# (in & out), Java, Node (JS and TypeScript), PowerShell and Python) and can be used for both triggers and output bindings.

    Error Handling with Apache Kafka extension for Azure Functions

    Key support for Kafka messages

    With keys, the producer/output binding can determine the broker and partition to write to based on the message. So alongside the message value, we can choose to send a message key, and that key can be whatever you want: a string, a number, and so on. If you don't send a key, the key is set to null and the data is sent to partitions in a round-robin fashion, to keep things simple. But if you do send a key with your message, all the messages that share the same key will always go to the same partition, so you can group related messages into partitions.

    Previously, when consuming a Kafka event message using the Azure Functions Kafka extension, the event key was always None even though the key was present in the event message.

    Key support was implemented in the extension, enabling customers to view the key on Kafka event messages coming into the Kafka trigger and to set keys on messages going to Kafka topics through the output binding. Key support covers both the trigger and output binding for all Azure Functions supported languages (C# (in & out), Java, Node (JS and TypeScript), PowerShell and Python).

    Here is the view of an output binding producer code where Kafka messages are being set with key

    Error Handling with Apache Kafka extension for Azure Functions

    Conclusion:

    In this article you have learnt about the latest additions to the Apache Kafka extension for Azure Functions. In case you have been waiting for these features to get released or need them, you are all set - please go ahead and try them out! They are available in the latest extension bundles.

    Want to learn more?

    Please refer to Apache Kafka bindings for Azure Functions | Microsoft Docs for detailed documentation, samples for the Azure Functions supported languages, and more!

    References

    FEEDBACK WELCOME

    Keep in touch with us on Twitter via @AzureFunctions.

    - + \ No newline at end of file diff --git a/blog/tags/zero-to-hero/page/3/index.html b/blog/tags/zero-to-hero/page/3/index.html index caf717d1a8..5f17e90d23 100644 --- a/blog/tags/zero-to-hero/page/3/index.html +++ b/blog/tags/zero-to-hero/page/3/index.html @@ -14,14 +14,14 @@ - +

    8 posts tagged with "zero-to-hero"

    View All Tags

    · 5 min read
    Mike Morton

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Log Streaming - in Azure Portal
    • Console Connect - in Azure Portal
    • Metrics - using Azure Monitor
    • Log Analytics - using Azure Monitor
    • Metric Alerts and Log Alerts - using Azure Monitor


    In past weeks, @kendallroden wrote about what it means to be Cloud-Native and @Anthony Chu covered the various ways to get your apps running on Azure Container Apps. Today, we will talk about the observability tools you can use to observe, debug, and diagnose your Azure Container Apps.

    Azure Container Apps provides several observability features to help you debug and diagnose your apps. There are both Azure portal and CLI options you can use to help understand the health of your apps and help identify when issues arise.

    While these features are helpful throughout your container app’s lifetime, there are two that are especially helpful. Log streaming and console connect can be a huge help in the initial stages when issues often rear their ugly head. Let's dig into both of these a little.

    Log Streaming

    Log streaming allows you to use the Azure portal to view the streaming logs from your app. You’ll see the logs written from the app to the container’s console (stderr and stdout). If your app is running multiple revisions, you can choose from which revision to view logs. You can also select a specific replica if your app is configured to scale. Lastly, you can choose from which container to view the log output. This is useful when you are running a custom or Dapr sidecar container. view streaming logs

    Here’s an example CLI command to view the logs of a container app.

    az containerapp logs show -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.

    Console Connect

    In the Azure portal, you can connect to the console of a container in your app. Like log streaming, you can select the revision, replica, and container if applicable. After connecting to the console of the container, you can execute shell commands and utilities that you have installed in your container. You can view files and their contents, monitor processes, and perform other debugging tasks.

    This can be great for checking configuration files or even modifying a setting or library your container is using. Of course, updating a container in this fashion is not something you should do to a production app, but tweaking and re-testing an app in a non-production environment can speed up development.

    Here’s an example CLI command to connect to the console of a container app.

    az containerapp exec -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.

    Metrics

    Azure Monitor collects metric data from your container app at regular intervals to help you gain insights into the performance and health of your container app. Container apps provide these metrics:

    • CPU usage
    • Memory working set bytes
    • Network in bytes
    • Network out bytes
    • Requests
    • Replica count
    • Replica restart count

    Here you can see the metrics explorer showing the replica count for an app as it scaled from one replica to fifteen, and then back down to one.

    You can also retrieve metric data through the Azure CLI.
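    For example, a query for the Requests metric might look like this (the resource ID is a placeholder for your container app's full resource ID):

    az monitor metrics list --resource <CONTAINER_APP_RESOURCE_ID> --metric Requests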

    Log Analytics

    Azure Monitor Log Analytics is great for viewing your historical logs emitted from your container apps. There are two custom tables of interest: ContainerAppConsoleLogs_CL, which contains all the log messages written by your app (stdout and stderr), and ContainerAppSystemLogs_CL, which contains the system messages from the Azure Container Apps service.

    You can also query Log Analytics through the Azure CLI.
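    For example, a quick query against the console logs table could look like this (the workspace GUID is a placeholder):

    az monitor log-analytics query --workspace <WORKSPACE_ID> --analytics-query "ContainerAppConsoleLogs_CL | take 20"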

    Alerts

    Azure Monitor alerts notify you so that you can respond quickly to critical issues. There are two types of alerts that you can define:

    You can create alert rules from metric charts in the metric explorer and from queries in Log Analytics. You can also define and manage alerts from the Monitor|Alerts page.

    Here is what creating an alert looks like in the Azure portal. In this case we are setting an alert rule from the metric explorer to trigger an alert if the replica restart count for a specific container app is greater than two within the last fifteen minutes.

    To learn more about alerts, refer to Overview of alerts in Microsoft Azure.

    Conclusion

    In this article, we looked at several ways to observe, debug, and diagnose your Azure Container Apps. As you can see, there are rich portal tools and a complete set of CLI commands to use. All of these tools are helpful throughout the lifecycle of your app; be sure to take advantage of them when you run into an issue, and to help prevent issues in the first place.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/tags/zero-to-hero/page/4/index.html b/blog/tags/zero-to-hero/page/4/index.html index 46d75724d4..923022c70b 100644 --- a/blog/tags/zero-to-hero/page/4/index.html +++ b/blog/tags/zero-to-hero/page/4/index.html @@ -14,13 +14,13 @@ - +

    8 posts tagged with "zero-to-hero"

    View All Tags

    · 6 min read
    Melony Qin

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Custom Handlers, and why use them?
    • How Custom Handler Works
    • Message Processing With Azure Custom Handler
    • Azure Portal Monitoring


    If you have been working with Azure Functions for a while, you may know Azure Functions is a serverless FaaS (Function-as-a-Service) offering provided by Microsoft Azure, built for key scenarios including building web APIs, processing file uploads, responding to database changes, processing IoT data streams, managing message queues, and more.

    Custom Handlers: What and Why

    Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, TypeScript, Python, and PowerShell. If you want extended language support with Azure Functions for other languages such as Go and Rust, that's where custom handlers come in.

    An Azure Functions custom handler lets you author Azure Functions in any language that supports HTTP primitives. With custom handlers, you can use triggers and input and output bindings via extension bundles, so they support all the triggers and bindings you're used to with Azure Functions.
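    Extension bundles are enabled in the function app's host.json. A minimal sketch looks like this (the version range is an assumption - use the range recommended for your runtime):

    {
      "version": "2.0",
      "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[3.*, 4.0.0)"
      }
    }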

    How a Custom Handler Works

    Let's take a look at custom handlers and how they work.

    • A request is sent to the function host when an event is triggered. It’s up to the function host to issue a request payload to the custom handler, which holds the trigger and inputs binding data as well as other metadata for the function.
    • The custom handler is a local HTTP web server. It executes the function code and returns a response payload to the Functions host.
    • The Functions host passes data from the response to the function's output bindings, which pass it on to the downstream services for data processing.

    Check out this article to know more about Azure functions custom handlers.


    Message processing with Custom Handlers

    Message processing is one of the key scenarios that Azure functions are trying to address. In the message-processing scenario, events are often collected in queues. These events can trigger Azure functions to execute a piece of business logic.

    You can use the Service Bus trigger to respond to messages from an Azure Service Bus queue - it's then up to the Azure functions custom handlers to take further actions to process the messages. The process is described in the following diagram:

    Building Serverless Go Applications with Azure functions custom handlers

    In Azure Functions, the function.json file defines the function's trigger, input and output bindings, and other configuration settings. Note that every function can have multiple bindings, but only one trigger. The following is an example of setting up the Service Bus queue trigger in the function.json file:

    {
      "bindings": [
        {
          "name": "queueItem",
          "type": "serviceBusTrigger",
          "direction": "in",
          "queueName": "functionqueue",
          "connection": "ServiceBusConnection"
        }
      ]
    }

    You can add a binding definition in function.json to write the output to a database or another destination of your choice. Supported bindings can be found here; a sketch of adding one is shown below.
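    For instance, a minimal sketch of adding an Azure Storage queue output binding alongside the Service Bus trigger could look like this (the queue name and connection setting names are illustrative assumptions; the custom handler would then return the output value in its response payload):

    {
      "bindings": [
        {
          "name": "queueItem",
          "type": "serviceBusTrigger",
          "direction": "in",
          "queueName": "functionqueue",
          "connection": "ServiceBusConnection"
        },
        {
          "name": "outputQueueItem",
          "type": "queue",
          "direction": "out",
          "queueName": "processed-messages",
          "connection": "AzureWebJobsStorage"
        }
      ]
    }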

    Since we're programming in Go, we need to set the value of defaultExecutablePath in the customHandler.description section of the host.json file to point to our handler executable.

    Assuming we're developing on Windows and have named our Go application server.go, running the go build server.go command produces an executable called server.exe. So we set server.exe in host.json, as in the following example:

      "customHandler": {
    "description": {
    "defaultExecutablePath": "./server.exe",
    "workingDirectory": "",
    "arguments": []
    }
    }

    We're showcasing a simple Go application with Azure Functions custom handlers that prints out the messages received from the Functions host. The following is the full code of the server.go application:

    package main

    import (
      "encoding/json"
      "fmt"
      "log"
      "net/http"
      "os"
    )

    type InvokeRequest struct {
      Data     map[string]json.RawMessage
      Metadata map[string]interface{}
    }

    func queueHandler(w http.ResponseWriter, r *http.Request) {
      var invokeRequest InvokeRequest

      d := json.NewDecoder(r.Body)
      d.Decode(&invokeRequest)

      var parsedMessage string
      json.Unmarshal(invokeRequest.Data["queueItem"], &parsedMessage)

      fmt.Println(parsedMessage)
    }

    func main() {
      customHandlerPort, exists := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
      if !exists {
        customHandlerPort = "8080"
      }
      mux := http.NewServeMux()
      mux.HandleFunc("/MessageProcessorFunction", queueHandler)
      fmt.Println("Go server Listening on: ", customHandlerPort)
      log.Fatal(http.ListenAndServe(":"+customHandlerPort, mux))
    }

    Ensure you have Azure Functions Core Tools installed; then we can use the func start command to start our function. Next, we'll use a C#-based Message Sender application on GitHub to send 3000 messages to the Azure Service Bus queue. You'll see Azure Functions instantly start to process the messages and print them out as follows:

    Monitoring Serverless Go Applications with Azure functions custom handlers


    Azure portal monitoring

    Let's go back to the Azure portal to see how the messages in the Azure Service Bus queue were processed. There were 3000 messages queued in the Service Bus queue (the blue line represents incoming messages). The outgoing messages (the red line with the smaller wave shape) show that they are progressively being read by Azure Functions, as follows:

    Monitoring Serverless Go Applications with Azure functions custom handlers

    Check out this article about monitoring Azure Service bus for further information.

    Next steps

    Thanks for following along, we’re looking forward to hearing your feedback. Also, if you discover potential issues, please record them on Azure Functions host GitHub repository or tag us @AzureFunctions on Twitter.

    RESOURCES

    Start to build your serverless applications with custom handlers, check out the official documentation:

    Life is a journey of learning. Let’s stay tuned!

    - + \ No newline at end of file diff --git a/blog/tags/zero-to-hero/page/5/index.html b/blog/tags/zero-to-hero/page/5/index.html index 8ad6f7fac5..37d770b694 100644 --- a/blog/tags/zero-to-hero/page/5/index.html +++ b/blog/tags/zero-to-hero/page/5/index.html @@ -14,13 +14,13 @@ - +

    8 posts tagged with "zero-to-hero"

    View All Tags

    · 5 min read
    Anthony Chu

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Using Visual Studio
    • Using Visual Studio Code: Docker, ACA extensions
    • Using Azure CLI
    • Using CI/CD Pipelines


    Last week, @kendallroden wrote about what it means to be Cloud-Native and how Azure Container Apps provides a serverless containers platform for hosting all of your Cloud-Native applications. Today, we’ll walk through a few ways to get your apps up and running on Azure Container Apps.

    Depending on where you are in your Cloud-Native app development journey, you might choose to use different tools to deploy your apps.

    • “Right-click, publish” – Deploying an app directly from an IDE or code editor is often seen as a bad practice, but it’s one of the quickest ways to test out an app in a cloud environment.
    • Command line interface – CLIs are useful for deploying apps from a terminal. Commands can be run manually or in a script.
    • Continuous integration/deployment – To deploy production apps, the recommended approach is to automate the process in a robust CI/CD pipeline.

    Let's explore some of these options in more depth.

    Visual Studio

    Visual Studio 2022 has built-in support for deploying .NET applications to Azure Container Apps. You can use the familiar publish dialog to provision Container Apps resources and deploy to them directly. This helps you prototype an app and see it run in Azure Container Apps with the least amount of effort.

    Journey to the cloud with Azure Container Apps

    Once you’re happy with the app and it’s ready for production, Visual Studio allows you to push your code to GitHub and set up a GitHub Actions workflow to build and deploy your app every time you push changes. You can do this by checking a box.

    Journey to the cloud with Azure Container Apps

    Visual Studio Code

    There are a couple of valuable extensions that you’ll want to install if you’re working in VS Code.

    Docker extension

    The Docker extension provides commands for building a container image for your app and pushing it to a container registry. It can even do this without requiring Docker Desktop on your local machine: the “Build image in Azure” command remotely builds and pushes a container image to Azure Container Registry.

    Journey to the cloud with Azure Container Apps

    And if your app doesn't have a Dockerfile, the extension will generate one for you.

    Journey to the cloud with Azure Container Apps

    Azure Container Apps extension

    Once you’ve built your container image and pushed it to a registry, the Azure Container Apps VS Code extension provides commands for creating a container app and deploying revisions using the image you’ve built.

    Journey to the cloud with Azure Container Apps

    Azure CLI

    The Azure CLI can be used to manage pretty much anything in Azure. For Azure Container Apps, you’ll find commands for creating, updating, and managing your Container Apps resources.

    Just like in VS Code, with a few commands in the Azure CLI, you can create your Azure resources, build and push your container image, and then deploy it to a container app.

    To make things as simple as possible, the Azure CLI also has an “az containerapp up” command. This single command takes care of everything that's needed to turn the source code on your local machine into a cloud-hosted application in Azure Container Apps.

    az containerapp up --name myapp --source ./src
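    In practice you'll usually want to be explicit about where the app lands and how it's exposed. Here's a hedged sketch of a fuller invocation; the resource group, environment, and port values are placeholders, and you should check "az containerapp up --help" for the full set of supported flags:

    # Deploy local source to a specific resource group and environment,
    # exposing the app over external ingress on port 8080 (names are placeholders)
    az containerapp up \
      --name myapp \
      --resource-group mygroup \
      --environment myenv \
      --source ./src \
      --ingress external \
      --target-port 8080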

    We saw earlier that Visual Studio can generate a GitHub Actions workflow to automatically build and deploy your app on every commit. “az containerapp up” can do this too. The following adds a workflow to a repo.

    az containerapp up --name myapp --repo https://github.com/myorg/myproject

    CI/CD pipelines

    When it’s time to take your app to production, it’s strongly recommended to set up a CI/CD pipeline to automatically and repeatably build, test, and deploy it. We’ve already seen that tools such as Visual Studio and Azure CLI can automatically generate a workflow for GitHub Actions. You can set up a pipeline in Azure DevOps too. This is an example Azure DevOps pipeline.

    trigger:
      branches:
        include:
          - main

    pool:
      vmImage: ubuntu-latest

    stages:
      - stage: Build
        jobs:
          - job: build
            displayName: Build app
            steps:
              - task: Docker@2
                inputs:
                  containerRegistry: 'myregistry'
                  repository: 'hello-aca'
                  command: 'buildAndPush'
                  Dockerfile: 'hello-container-apps/Dockerfile'
                  tags: '$(Build.BuildId)'

      - stage: Deploy
        jobs:
          - job: deploy
            displayName: Deploy app
            steps:
              - task: AzureCLI@2
                inputs:
                  azureSubscription: 'my-subscription(5361b9d6-46ea-43c3-a898-15f14afb0db6)'
                  scriptType: 'bash'
                  scriptLocation: 'inlineScript'
                  inlineScript: |
                    # automatically install Container Apps CLI extension
                    az config set extension.use_dynamic_install=yes_without_prompt

                    # ensure registry is configured in container app
                    az containerapp registry set \
                      --name hello-aca \
                      --resource-group mygroup \
                      --server myregistry.azurecr.io \
                      --identity system

                    # update container app
                    az containerapp update \
                      --name hello-aca \
                      --resource-group mygroup \
                      --image myregistry.azurecr.io/hello-aca:$(Build.BuildId)

    Conclusion

    In this article, we looked at a few ways to deploy your Cloud-Native applications to Azure Container Apps and how to decide which one to use based on where you are in your app’s journey to the cloud.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/tags/zero-to-hero/page/6/index.html b/blog/tags/zero-to-hero/page/6/index.html index 4e532ecb60..afb33c0874 100644 --- a/blog/tags/zero-to-hero/page/6/index.html +++ b/blog/tags/zero-to-hero/page/6/index.html @@ -14,13 +14,13 @@ - +

    8 posts tagged with "zero-to-hero"

    View All Tags

    · 5 min read
    Nitya Narasimhan
    Devanshi Joshi

    SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed


    Welcome to Day 8 of #30DaysOfServerless!

    This marks the end of our Week 1 Roadmap focused on Azure Functions!! Today, we'll do a quick recap of all #ServerlessSeptember activities in Week 1, set the stage for Week 2 - and leave you with some excellent tutorials you should explore to build more advanced scenarios with Azure Functions.

    Ready? Let's go.


    What We'll Cover

    • Azure Functions: Week 1 Recap
    • Advanced Functions: Explore Samples
    • End-to-End: Serverless Hacks & Cloud Skills
    • What's Next: Hello, Containers & Microservices
    • Challenge: Complete the Learning Path


    Week 1 Recap: #30Days & Functions

    Congratulations!! We made it to the end of Week 1 of #ServerlessSeptember. Let's recap what we learned so far:

    • In Core Concepts we looked at where Azure Functions fits into the serverless options available on Azure. And we learned about key concepts like Triggers, Bindings, Custom Handlers and Durable Functions.
    • In Build Your First Function we looked at the tooling options for creating Functions apps, testing them locally, and deploying them to Azure - as we built and deployed our first Functions app.
    • In the next 4 posts, we explored new Triggers, Integrations, and Scenarios - as we looked at building Functions Apps in Java, JavaScript, .NET and Python.
    • And in the Zero-To-Hero series, we learned about Durable Entities - and how we can use them to create stateful serverless solutions using a Chirper Sample as an example scenario.

    The illustrated roadmap below summarizes what we covered each day this week, as we bring our Functions-as-a-Service exploration to a close.


    Advanced Functions: Code Samples

    So, now that we've got our first Functions app under our belt and validated our local development setup for tooling, where can we go next? A good next step is to explore different triggers and bindings that drive richer end-to-end scenarios. For example:

    • Integrate Functions with Azure Logic Apps - we'll discuss Azure Logic Apps in Week 3. For now, think of it as a workflow automation tool that lets you integrate seamlessly with other supported Azure services to drive an end-to-end scenario. In this tutorial, we set up a workflow connecting Twitter (get tweet) to Azure Cognitive Services (analyze sentiment) - and use that to trigger an Azure Functions app to send email about the result.
    • Integrate Functions with Event Grid - we'll discuss Azure Event Grid in Week 3. For now, think of it as an eventing service connecting event sources (publishers) to event handlers (subscribers) at cloud scale. In this tutorial, we handle a common use case - a workflow where loading an image to Blob Storage triggers an Azure Functions app that implements a resize function, helping automatically generate thumbnails for the uploaded image.
    • Integrate Functions with CosmosDB and SignalR to bring real-time push-based notifications to your web app. It achieves this by using a Functions app that is triggered by changes in a CosmosDB backend, causing it to broadcast that update (a push notification) to connected web clients over SignalR, in real time.

    Want more ideas? Check out the Azure Samples for Functions for implementations, and browse the Azure Architecture Center for reference architectures from real-world scenarios that involve Azure Functions usage.


    E2E Scenarios: Hacks & Cloud Skills

    Want to systematically work your way through a single End-to-End scenario involving Azure Functions alongside other serverless support technologies? Check out the Serverless Hacks activity happening during #ServerlessSeptember, and learn to build this "Serverless Tollbooth Application" in a series of 10 challenges. Check out the video series for a reference solution in .NET and sign up for weekly office hours to join peers and discuss your solutions or challenges.

    Or perhaps you prefer to learn core concepts with code in a structured learning path? We have that covered. Check out the 12-module "Create Serverless Applications" course from Microsoft Learn which walks you through concepts, one at a time, with code. Even better - sign up for the free Cloud Skills Challenge and complete the same path (in under 30 days) but this time, with the added fun of competing against your peers for a spot on a leaderboard, and swag.


    What's Next? Hello, Cloud-Native!

    So where to next? In Week 2 we turn our attention from Functions-as-a-Service to building more complex backends using Containers and Microservices. We'll focus on two core technologies - Azure Container Apps and Dapr (Distributed Application Runtime) - both key components of a broader vision around Building Cloud-Native Applications in Azure.

    What is Cloud-Native you ask?

    Fortunately for you, we have an excellent introduction in our Zero-to-Hero article on Go Cloud-Native with Azure Container Apps - that explains the 5 pillars of Cloud-Native and highlights the value of Azure Container Apps (scenarios) and Dapr (sidecar architecture) for simplified microservices-based solutions with auto-scale capability. Prefer a visual summary? Here's an illustrated guide to that article for convenience.

    Go Cloud-Native Download a higher resolution version of the image


    Take The Challenge

    We typically end each post with an exercise or activity to reinforce what you learned. For Week 1, we encourage you to take the Cloud Skills Challenge and work your way through at least a subset of the modules, for hands-on experience with the different Azure Functions concepts, integrations, and usage.

    See you in Week 2!

    - + \ No newline at end of file diff --git a/blog/tags/zero-to-hero/page/7/index.html b/blog/tags/zero-to-hero/page/7/index.html index 9d775c5625..8d0add9cfe 100644 --- a/blog/tags/zero-to-hero/page/7/index.html +++ b/blog/tags/zero-to-hero/page/7/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    8 posts tagged with "zero-to-hero"

    View All Tags

    · 8 min read
    David Justo

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Durable Entities
    • Some Background
    • A Programming Model
    • Entities for a Micro-Blogging Platform


    Durable Entities are a special type of Azure Function that allow you to implement stateful objects in a serverless environment. They make it easy to introduce stateful components to your app without needing to manually persist data to external storage, so you can focus on your business logic. We’ll demonstrate their power with a real-life example in the last section.

    Entities 101: Some Background

    Programming Durable Entities feels a lot like object-oriented programming, except that these “objects” exist in a distributed system. Like objects, each Entity instance has a unique identifier, i.e. an entity ID that can be used to read and manipulate their internal state. Entities define a list of operations that constrain how their internal state is managed, like an object interface.

    Some experienced readers may realize that Entities sound a lot like an implementation of the Actor Pattern. For a discussion of the relationship between Entities and Actors, please refer to this documentation.

    Entities are a part of the Durable Functions Extension, an extension of Azure Functions that empowers programmers with stateful abstractions for serverless, such as Orchestrations (i.e. workflows).

    Durable Functions is available in most Azure Functions runtime environments: .NET, Node.js, Python, PowerShell, and Java (preview). For this article, we’ll focus on the C# experience, but note that Entities are also available in Node.js and Python; their availability in other languages is underway.

    Entities 102: The programming model

    Imagine you want to implement a simple Entity that just counts things. Its interface allows you to get the current count, add to the current count, and to reset the count to zero.

    If you implement this in an object-oriented way, you’d probably define a class (say “Counter”) with a method to get the current count (say “Counter.Get”), another to add to the count (say “Counter.Add”), and another to reset the count (say “Counter.Reset”). Well, the implementation of an Entity in C# is not that different from this sketch:

    [JsonObject(MemberSerialization.OptIn)]
    public class Counter
    {
        [JsonProperty("value")]
        public int Value { get; set; }

        public void Add(int amount)
        {
            this.Value += amount;
        }

        public Task Reset()
        {
            this.Value = 0;
            return Task.CompletedTask;
        }

        public Task<int> Get()
        {
            return Task.FromResult(this.Value);
        }

        [FunctionName(nameof(Counter))]
        public static Task Run([EntityTrigger] IDurableEntityContext ctx)
            => ctx.DispatchAsync<Counter>();
    }

    We’ve defined a class named Counter, with an internal count stored in the variable “Value” which is manipulated through the “Add” and “Reset” methods, and which can be read via “Get”.

    The “Run” method is simply boilerplate required for the Azure Functions framework to interact with the object we’ve defined – it’s the method that the framework calls internally when it needs to load the Entity object. When DispatchAsync is called, the Entity and its corresponding state (the last count in “Value”) are loaded from storage. Again, this is mostly just boilerplate: your Entity’s business logic lies in the rest of the class.

    Finally, the Json annotation on top of the class and the Value field tells the Durable Functions framework that the “Value” field is to be durably persisted as part of the durable state on each Entity invocation. If you were to annotate other class variables with JsonProperty, they would also become part of the managed state.

    Entities for a micro-blogging platform

    We’ll try to implement a simple micro-blogging platform, a la Twitter. Let’s call it “Chirper”. In Chirper, users write chirps (i.e. tweets), they can follow and unfollow other users, and they can read the chirps of users they follow.

    Defining Entity

    Just like in OOP, it’s useful to begin by identifying the stateful agents of this scenario. In this case, users have state (who they follow and their chirps), and chirps have state in the form of their content. So, we could model these stateful agents as Entities!

    Below is a potential way to implement a User for Chirper as an Entity:

    [JsonObject(MemberSerialization = MemberSerialization.OptIn)]
    public class User : IUser
    {
        [JsonProperty]
        public List<string> FollowedUsers { get; set; } = new List<string>();

        public void Add(string user)
        {
            FollowedUsers.Add(user);
        }

        public void Remove(string user)
        {
            FollowedUsers.Remove(user);
        }

        public Task<List<string>> Get()
        {
            return Task.FromResult(FollowedUsers);
        }

        // note: removed boilerplate “Run” method, for conciseness.
    }

    In this case, our Entity’s internal state is stored in “FollowedUsers”, which is a list of the accounts followed by this user. The operations exposed by this entity allow clients to read and modify this data: it can be read via “Get”, a followed account can be added via “Add”, and a user can be unfollowed via “Remove”.

    With that, we’ve modeled a Chirper user as an Entity! Recall that each Entity instance has a unique ID, so we can consider that unique ID to correspond to a specific user account.

    What about chirps? Should we represent them as Entities as well? That would certainly be valid. However, we would then need to create a mapping between each user’s entity ID and the entity IDs of all the chirps that user wrote.

    For demonstration purposes, a simpler approach would be to create an Entity that stores the list of all chirps authored by a given user; call it UserChirps. Then, we could fix each User Entity to share the same entity ID as its corresponding UserChirps Entity, making client operations easier.

    Below is a simple implementation of UserChirps:

    [JsonObject(MemberSerialization = MemberSerialization.OptIn)]
    public class UserChirps : IUserChirps
    {
        [JsonProperty]
        public List<Chirp> Chirps { get; set; } = new List<Chirp>();

        public void Add(Chirp chirp)
        {
            Chirps.Add(chirp);
        }

        public void Remove(DateTime timestamp)
        {
            Chirps.RemoveAll(chirp => chirp.Timestamp == timestamp);
        }

        public Task<List<Chirp>> Get()
        {
            return Task.FromResult(Chirps);
        }

        // Omitted boilerplate “Run” function
    }

    Here, our state is stored in Chirps, a list of user posts. Our operations follow the same pattern as before - Get, Add, and Remove - but we’re representing different data.

    To put it all together, let’s set up Entity clients to generate and manipulate these Entities according to some REST API.

    Interacting with Entity

    Before going there, let’s talk briefly about how you can interact with an Entity. Entity interactions take one of two forms -- calls and signals:

    • Calling an entity is a two-way communication. You send an operation message to the entity and then wait for the response message before you continue. The response can be a result value or an error.
    • Signaling an entity is a one-way (fire-and-forget) communication. You send an operation message but don’t wait for a response. You have the reassurance that the message will be delivered eventually, but you don’t know when and don’t know what the response is.

    For example, when you read the state of an Entity, you are performing a “call” interaction. When you record that a user has followed another, you may choose to simply signal it.

    Now say a user with a given userId (say “durableFan99”) wants to post a chirp. For this, you can write an HTTP endpoint that signals the UserChirps entity to record that chirp. We can leverage the HTTP Trigger functionality from Azure Functions and pair it with an entity client binding that signals the Add operation of our UserChirps Entity:

    [FunctionName("UserChirpsPost")] 
    public static async Task<HttpResponseMessage> UserChirpsPost(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "user/{userId}/chirps")]
    HttpRequestMessage req,
    DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {
    Authenticate(req, userId);
    var chirp = new Chirp()
    {
    UserId = userId,
    Timestamp = DateTime.UtcNow,
    Content = await req.Content.ReadAsStringAsync(),
    };
    await client.SignalEntityAsync<IUserChirps>(userId, x => x.Add(chirp));
    return req.CreateResponse(HttpStatusCode.Accepted, chirp);
    }

    Following the same pattern as above, to get all the chirps from a user, you could read the status of your Entity via ReadEntityStateAsync, which follows the call-interaction pattern as your client expects a response:

    [FunctionName("UserChirpsGet")] 
    public static async Task<HttpResponseMessage> UserChirpsGet(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "user/{userId}/chirps")] HttpRequestMessage req,
    [DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {

    Authenticate(req, userId);
    var target = new EntityId(nameof(UserChirps), userId);
    var chirps = await client.ReadEntityStateAsync<UserChirps>(target);
    return chirps.EntityExists
    ? req.CreateResponse(HttpStatusCode.OK, chirps.EntityState.Chirps)
    : req.CreateResponse(HttpStatusCode.NotFound);
    }

    And there you have it! To play with a complete implementation of Chirper, you can try out our sample in the Durable Functions extension repo.

    Thank you!

    info

    Thanks for following along, and we hope you find Entities as useful as we do! If you have questions or feedback, please file issues in the repo above or tag us @AzureFunctions on Twitter.

    - + \ No newline at end of file diff --git a/blog/tags/zero-to-hero/page/8/index.html b/blog/tags/zero-to-hero/page/8/index.html index 92a6836401..1d18d9bd3a 100644 --- a/blog/tags/zero-to-hero/page/8/index.html +++ b/blog/tags/zero-to-hero/page/8/index.html @@ -14,13 +14,13 @@ - +

    8 posts tagged with "zero-to-hero"

    View All Tags

    · 8 min read
    Kendall Roden

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Defining Cloud-Native
    • Introduction to Azure Container Apps
    • Dapr In Azure Container Apps
    • Conclusion


    Defining Cloud-Native

    While I’m positive I’m not the first person to ask this, I think it’s an appropriate way for us to kick off this article: “How many developers does it take to define Cloud-Native?” I hope you aren’t waiting for a punch line because I seriously want to know your thoughts (drop your perspectives in the comments..) but if you ask me, the limit does not exist!

    A quick online search of the topic returns a laundry list of articles, e-books, twitter threads, etc. all trying to nail down the one true definition. While diving into the rabbit hole of Cloud-Native, you will inevitably find yourself on the Cloud-Native Computing Foundation (CNCF) site. The CNCF is part of the Linux Foundation and aims to make "Cloud-Native computing ubiquitous" through deep open source project and community involvement. The CNCF has also published arguably the most popularized definition of Cloud-Native which begins with the following statement:

    “Cloud-Native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds."

    Over the past four years, my day-to-day work has been driven primarily by the surging demand for application containerization and the drastic adoption of Kubernetes as the de-facto container orchestrator. Customers are eager to learn and leverage patterns, practices and technologies that enable building "loosely coupled systems that are resilient, manageable, and observable". Enterprise developers at these organizations are being tasked with rapidly deploying event-driven, horizontally-scalable, polyglot services via repeatable, code-to-cloud pipelines.

    While building Cloud-Native solutions can enable rapid innovation, the transition to adopting a Cloud-Native architectural approach comes with a steep learning curve and a new set of considerations. In a document published by Microsoft called What is Cloud-Native?, there are a few key areas highlighted to aid customers in the adoption of best practices for building modern, portable applications which I will summarize below:

    Cloud infrastructure

    • Cloud-Native applications leverage cloud infrastructure and make use of Platform-as-a-service offerings
    • Cloud-Native applications depend on highly-elastic infrastructure with automatic scaling, self-healing, and monitoring capabilities

    Modern application design

    • Cloud-Native applications should be constructed using principles outlined in the 12 factor methodology

    Microservices

    • Cloud-Native applications are typically composed of microservices where each core function, or service, is built and deployed independently

    Containers

    • Cloud-Native applications are typically deployed using containers as a packaging mechanism where an application's code and dependencies are bundled together for consistency of deployment
    • Cloud-Native applications leverage container orchestration technologies- primarily Kubernetes- for achieving capabilities such as workload scheduling, self-healing, auto-scale, etc.

    Backing services

    • Cloud-Native applications are ideally stateless workloads which retrieve and store data in data stores external to the application hosting infrastructure. Cloud providers like Azure provide an array of backing data services which can be securely accessed from application code and provide capabilities for ensuring application data is highly-available

    Automation

    • Cloud-Native solutions should use deployment automation for backing cloud infrastructure via versioned, parameterized Infrastructure as Code (IaC) templates which provide a consistent, repeatable process for provisioning cloud resources.
    • Cloud-Native solutions should make use of modern CI/CD practices and pipelines to ensure successful, reliable infrastructure and application deployment.

    Azure Container Apps

    In many of the conversations I've had with customers that involve talk of Kubernetes and containers, the topics of cost-optimization, security, networking, and reducing infrastructure and operations inevitably arise. I personally have yet to meet with any customers eager to have their developers get more involved with infrastructure concerns.

    One of my former colleagues, Jeff Hollan, made a statement while appearing on a 2019 episode of The Cloud-Native Show where he shared his perspective on Cloud-Native:

    "When I think about Cloud-Native... it's writing applications in a way where you are specifically thinking about the benefits the cloud can provide... to me, serverless is the perfect realization of that because the only reason you can write serverless applications is because the cloud exists."

    I must say that I agree with Jeff's perspective. In addition to optimizing development practices for the Cloud-Native world, reducing infrastructure exposure and operations is equally as important to many organizations and can be achieved as a result of cloud platform innovation.

    In May of 2022, Microsoft announced the general availability of Azure Container Apps. Azure Container Apps provides customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform.

    For those interested in taking advantage of the open source ecosystem while reaping the benefits of a managed platform experience, Azure Container Apps runs on Kubernetes and provides a set of managed open source projects embedded directly into the platform, including the Kubernetes Event Driven Autoscaler (KEDA), the Distributed Application Runtime (Dapr) and Envoy.

    Azure Kubernetes Service vs. Azure Container Apps

    Container apps provides other Cloud-Native features and capabilities in addition to those above including, but not limited to:

    The ability to dynamically scale and support growing numbers of users, events, and requests is one of the core requirements for most Cloud-Native, distributed applications. Azure Container Apps is purpose-built with this and other Cloud-Native tenets in mind.

    What can you build with Azure Container Apps?

    Dapr in Azure Container Apps

    As a quick personal note before we dive into this section, I will say I am a bit biased about Dapr. When Dapr was first released, I had an opportunity to immediately get involved and became an early advocate for the project. It is created by developers for developers, and solves tangible problems customers architecting distributed systems face:

    HOW DO I
    • integrate with external systems that my app has to react and respond to?
    • create event driven apps which reliably send events from one service to another?
    • observe the calls and events between my services to diagnose issues in production?
    • access secrets securely from within my application?
    • discover other services and call methods on them?
    • prevent committing to a technology early and have the flexibility to swap out an alternative based on project or environment changes?

    While existing solutions were in the market which could be used to address some of the concerns above, there was not a lightweight, CNCF-backed project which could provide a unified approach to solve the more fundamental ask from customers: "How do I make it easy for developers to build microservices based on Cloud-Native best practices?"

    Enter Dapr!

    "The Distributed Application Runtime (Dapr) provides APIs that simplify microservice connectivity. Whether your communication pattern is service to service invocation or pub/sub messaging, Dapr helps you write resilient and secured microservices. By letting Dapr’s sidecar take care of the complex challenges such as service discovery, message broker integration, encryption, observability, and secret management, you can focus on business logic and keep your code simple."

    The Container Apps platform provides a managed and supported Dapr integration which eliminates the need for deploying and managing the Dapr OSS project. In addition to providing managed upgrades, the platform also exposes a simplified Dapr interaction model to increase developer productivity and reduce the friction required to leverage Dapr capabilities. While the Dapr integration makes it easier for customers to adopt Cloud-Native best practices in container apps, it is not required to make use of the container apps platform.

    Image on Dapr
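    To give a sense of the simplified interaction model described above, enabling the managed Dapr sidecar comes down to a few flags on the CLI. Here's a hedged sketch; the app name, environment, image, Dapr app id, and port are placeholders for illustration:

    # Create a container app with the managed Dapr sidecar enabled
    # (resource names, Dapr app id, and port are placeholders)
    az containerapp create \
      --name orders-service \
      --resource-group mygroup \
      --environment myenv \
      --image myregistry.azurecr.io/orders-service:v1 \
      --enable-dapr \
      --dapr-app-id orders-service \
      --dapr-app-port 3000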

    For additional insight into the Dapr integration, visit aka.ms/aca-dapr.

    Conclusion

    Backed by and integrated with powerful Cloud-Native technologies, Azure Container Apps strives to make developers productive, while reducing the operational overhead and learning curve that typically accompanies adopting a cloud-native strategy.

    If you are interested in building resilient, portable and highly-scalable apps visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/welcome/index.html b/blog/welcome/index.html index 0dbbb497ce..e820c3fc32 100644 --- a/blog/welcome/index.html +++ b/blog/welcome/index.html @@ -14,13 +14,13 @@ - +

    Hello, ServerlessSeptember

    · 3 min read
    Nitya Narasimhan
    Devanshi Joshi

    🍂 It's September?

    Well, almost! September 1 is a few days away and I'm excited! Why? Because it's the perfect time to revisit #Serverless September, a month of

    ".. content-driven learning where experts and practitioners share their insights and tutorials on how to use serverless technologies effectively in today's ecosystems"

    If the words look familiar, it's because I actually wrote them 2 years ago when we launched the 2020 edition of this series. You might even recall this whimsical image I drew to capture the concept of September (fall) and Serverless (event-driven on-demand compute). Since then, a lot has happened in the serverless ecosystem!

    You can still browse the 2020 Content Collection to find great talks, articles and code samples to get started using Serverless on Azure. But read on to learn what's new!

    🧐 What's New?

    Well - quite a few things actually. This year, Devanshi Joshi and I expanded the original concept in a number of ways. Here are just a few of them that come to mind.

    New Website

    This year, we created this website (shortcut: https://aka.ms/serverless-september) to serve as a permanent home for content in 2022 and beyond - making it a canonical source for the #serverless posts we publish to tech communities like dev.to, Azure Developer Community and Apps On Azure. We hope this also makes it easier for you to search for, or discover, current and past articles that support your learning journey!

    Start by bookmarking these two sites:

    More Options

    Previous years focused on curating and sharing content authored by Microsoft and community contributors, showcasing serverless examples and best practices. This was perfect for those who already had experience with the core devtools and concepts.

    This year, we wanted to combine beginner-friendly options (for those just starting their serverless journey) with more advanced insights (for those looking to skill up further). Here's a sneak peek at some of the initiatives we've got planned!

    We'll also explore the full spectrum of serverless - from Functions-as-a-Service (for granularity) to Containerization (for deployment) and Microservices (for scalability). Here are a few services and technologies you'll get to learn more about:

    ⚡️ Join us!

    This has been a labor of love from multiple teams at Microsoft! We can't wait to share all the resources that we hope will help you skill up on all things Serverless this September! Here are a couple of ways to participate:

    - + \ No newline at end of file diff --git a/blog/zero2hero-aca-01/index.html b/blog/zero2hero-aca-01/index.html index eb4b263eff..59234e32f5 100644 --- a/blog/zero2hero-aca-01/index.html +++ b/blog/zero2hero-aca-01/index.html @@ -14,13 +14,13 @@ - +

    🚀 | Go Cloud-Native with ACA

    · 8 min read
    Kendall Roden

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Defining Cloud-Native
    • Introduction to Azure Container Apps
    • Dapr In Azure Container Apps
    • Conclusion


    Defining Cloud-Native

    While I’m positive I’m not the first person to ask this, I think it’s an appropriate way for us to kick off this article: “How many developers does it take to define Cloud-Native?” I hope you aren’t waiting for a punch line because I seriously want to know your thoughts (drop your perspectives in the comments..) but if you ask me, the limit does not exist!

    A quick online search of the topic returns a laundry list of articles, e-books, twitter threads, etc. all trying to nail down the one true definition. While diving into the rabbit hole of Cloud-Native, you will inevitably find yourself on the Cloud-Native Computing Foundation (CNCF) site. The CNCF is part of the Linux Foundation and aims to make "Cloud-Native computing ubiquitous" through deep open source project and community involvement. The CNCF has also published arguably the most popularized definition of Cloud-Native which begins with the following statement:

    “Cloud-Native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds."

    Over the past four years, my day-to-day work has been driven primarily by the surging demand for application containerization and the drastic adoption of Kubernetes as the de-facto container orchestrator. Customers are eager to learn and leverage patterns, practices and technologies that enable building "loosely coupled systems that are resilient, manageable, and observable". Enterprise developers at these organizations are being tasked with rapidly deploying event-driven, horizontally-scalable, polyglot services via repeatable, code-to-cloud pipelines.

    While building Cloud-Native solutions can enable rapid innovation, the transition to adopting a Cloud-Native architectural approach comes with a steep learning curve and a new set of considerations. In a document published by Microsoft called What is Cloud-Native?, there are a few key areas highlighted to aid customers in the adoption of best practices for building modern, portable applications which I will summarize below:

    Cloud infrastructure

    • Cloud-Native applications leverage cloud infrastructure and make use of Platform-as-a-service offerings
    • Cloud-Native applications depend on highly-elastic infrastructure with automatic scaling, self-healing, and monitoring capabilities

    Modern application design

    • Cloud-Native applications should be constructed using principles outlined in the 12 factor methodology

    Microservices

    • Cloud-Native applications are typically composed of microservices where each core function, or service, is built and deployed independently

    Containers

    • Cloud-Native applications are typically deployed using containers as a packaging mechanism where an application's code and dependencies are bundled together for consistency of deployment
    • Cloud-Native applications leverage container orchestration technologies- primarily Kubernetes- for achieving capabilities such as workload scheduling, self-healing, auto-scale, etc.

    Backing services

    • Cloud-Native applications are ideally stateless workloads which retrieve and store data in data stores external to the application hosting infrastructure. Cloud providers like Azure provide an array of backing data services which can be securely accessed from application code and provide capabilities for ensuring application data is highly-available

    Automation

    • Cloud-Native solutions should use deployment automation for backing cloud infrastructure via versioned, parameterized Infrastructure as Code (IaC) templates which provide a consistent, repeatable process for provisioning cloud resources.
    • Cloud-Native solutions should make use of modern CI/CD practices and pipelines to ensure successful, reliable infrastructure and application deployment.

    Azure Container Apps

    In many of the conversations I've had with customers that involve talk of Kubernetes and containers, the topics of cost-optimization, security, networking, and reducing infrastructure and operations inevitably arise. I personally have yet to meet with any customers eager to have their developers get more involved with infrastructure concerns.

    One of my former colleagues, Jeff Hollan, made a statement while appearing on a 2019 episode of The Cloud-Native Show where he shared his perspective on Cloud-Native:

    "When I think about Cloud-Native... it's writing applications in a way where you are specifically thinking about the benefits the cloud can provide... to me, serverless is the perfect realization of that because the only reason you can write serverless applications is because the cloud exists."

    I must say that I agree with Jeff's perspective. In addition to optimizing development practices for the Cloud-Native world, reducing infrastructure exposure and operations is equally as important to many organizations and can be achieved as a result of cloud platform innovation.

    In May of 2022, Microsoft announced the general availability of Azure Container Apps. Azure Container Apps provides customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform.

    For those interested in taking advantage of the open source ecosystem while reaping the benefits of a managed platform experience, Azure Container Apps runs on Kubernetes and provides a set of managed open source projects embedded directly into the platform, including the Kubernetes Event Driven Autoscaler (KEDA), the Distributed Application Runtime (Dapr) and Envoy.

    Azure Kubernetes Service vs. Azure Container Apps

    Container apps provides other Cloud-Native features and capabilities in addition to those above including, but not limited to:

    The ability to dynamically scale and support growing numbers of users, events, and requests is one of the core requirements for most Cloud-Native, distributed applications. Azure Container Apps is purpose-built with this and other Cloud-Native tenets in mind.

    What can you build with Azure Container Apps?

    Dapr in Azure Container Apps

    As a quick personal note before we dive into this section, I will say I am a bit biased about Dapr. When Dapr was first released, I had an opportunity to immediately get involved and became an early advocate for the project. It is created by developers for developers, and solves tangible problems customers architecting distributed systems face:

    HOW DO I
    • integrate with external systems that my app has to react and respond to?
    • create event driven apps which reliably send events from one service to another?
    • observe the calls and events between my services to diagnose issues in production?
    • access secrets securely from within my application?
    • discover other services and call methods on them?
    • prevent committing to a technology early and have the flexibility to swap out an alternative based on project or environment changes?

    While existing solutions were in the market which could be used to address some of the concerns above, there was not a lightweight, CNCF-backed project which could provide a unified approach to solve the more fundamental ask from customers: "How do I make it easy for developers to build microservices based on Cloud-Native best practices?"

    Enter Dapr!

    "The Distributed Application Runtime (Dapr) provides APIs that simplify microservice connectivity. Whether your communication pattern is service to service invocation or pub/sub messaging, Dapr helps you write resilient and secured microservices. By letting Dapr’s sidecar take care of the complex challenges such as service discovery, message broker integration, encryption, observability, and secret management, you can focus on business logic and keep your code simple."

    The Container Apps platform provides a managed and supported Dapr integration which eliminates the need for deploying and managing the Dapr OSS project. In addition to providing managed upgrades, the platform also exposes a simplified Dapr interaction model to increase developer productivity and reduce the friction required to leverage Dapr capabilities. While the Dapr integration makes it easier for customers to adopt Cloud-Native best practices in container apps, it is not required to make use of the container apps platform.

    Image on Dapr
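    To give a sense of the simplified interaction model described above, enabling the managed Dapr sidecar comes down to a few flags on the CLI. Here's a hedged sketch; the app name, environment, image, Dapr app id, and port are placeholders for illustration:

    # Create a container app with the managed Dapr sidecar enabled
    # (resource names, Dapr app id, and port are placeholders)
    az containerapp create \
      --name orders-service \
      --resource-group mygroup \
      --environment myenv \
      --image myregistry.azurecr.io/orders-service:v1 \
      --enable-dapr \
      --dapr-app-id orders-service \
      --dapr-app-port 3000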

    For additional insight into the Dapr integration, visit aka.ms/aca-dapr.

    Conclusion

    Backed by and integrated with powerful Cloud-Native technologies, Azure Container Apps strives to make developers productive, while reducing the operational overhead and learning curve that typically accompanies adopting a cloud-native strategy.

    If you are interested in building resilient, portable and highly-scalable apps visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/zero2hero-aca-04/index.html b/blog/zero2hero-aca-04/index.html index fab8b406dc..603e43f158 100644 --- a/blog/zero2hero-aca-04/index.html +++ b/blog/zero2hero-aca-04/index.html @@ -14,13 +14,13 @@ - +

    🚀 | Journey to the Cloud With ACA

    · 5 min read
    Anthony Chu

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Using Visual Studio
    • Using Visual Studio Code: Docker, ACA extensions
    • Using Azure CLI
    • Using CI/CD Pipelines


    Last week, @kendallroden wrote about what it means to be Cloud-Native and how Azure Container Apps provides a serverless containers platform for hosting all of your Cloud-Native applications. Today, we’ll walk through a few ways to get your apps up and running on Azure Container Apps.

    Depending on where you are in your Cloud-Native app development journey, you might choose to use different tools to deploy your apps.

    • “Right-click, publish” – Deploying an app directly from an IDE or code editor is often seen as a bad practice, but it’s one of the quickest ways to test out an app in a cloud environment.
    • Command line interface – CLIs are useful for deploying apps from a terminal. Commands can be run manually or in a script.
    • Continuous integration/deployment – To deploy production apps, the recommended approach is to automate the process in a robust CI/CD pipeline.

    Let's explore some of these options in more depth.

    Visual Studio

    Visual Studio 2022 has built-in support for deploying .NET applications to Azure Container Apps. You can use the familiar publish dialog to provision Container Apps resources and deploy to them directly. This helps you prototype an app and see it run in Azure Container Apps with the least amount of effort.

    Journey to the cloud with Azure Container Apps

    Once you’re happy with the app and it’s ready for production, Visual Studio allows you to push your code to GitHub and set up a GitHub Actions workflow to build and deploy your app every time you push changes. You can do this by checking a box.

    Journey to the cloud with Azure Container Apps

    Visual Studio Code

    There are a couple of valuable extensions that you’ll want to install if you’re working in VS Code.

    Docker extension

    The Docker extension provides commands for building a container image for your app and pushing it to a container registry. It can even do this without requiring Docker Desktop on your local machine --- the “Build image in Azure” command remotely builds and pushes a container image to Azure Container Registry.

    Journey to the cloud with Azure Container Apps

    And if your app doesn’t have a Dockerfile, the extension will generate one for you.

    Journey to the cloud with Azure Container Apps
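    If you prefer the terminal, the same remote build is available through the Azure CLI. Here's a minimal sketch of an equivalent command; the registry and image names below are placeholders for illustration:

    # Build the image in Azure Container Registry without needing Docker locally
    # (registry name and image tag are placeholders)
    az acr build --registry myregistry --image hello-aca:v1 .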

    Azure Container Apps extension

    Once you’ve built your container image and pushed it to a registry, the Azure Container Apps VS Code extension provides commands for creating a container app and deploying revisions using the image you’ve built.

    Journey to the cloud with Azure Container Apps

    Azure CLI

    The Azure CLI can be used to manage pretty much anything in Azure. For Azure Container Apps, you’ll find commands for creating, updating, and managing your Container Apps resources.

    Just like in VS Code, with a few commands in the Azure CLI, you can create your Azure resources, build and push your container image, and then deploy it to a container app.

    To make things as simple as possible, the Azure CLI also has an “az containerapp up” command. This single command takes care of everything that’s needed to turn the source code on your local machine into a cloud-hosted application in Azure Container Apps.

    az containerapp up --name myapp --source ./src
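    In practice you'll usually want to be explicit about where the app lands and how it's exposed. Here's a hedged sketch of a fuller invocation; the resource group, environment, and port values are placeholders, and you should check "az containerapp up --help" for the full set of supported flags:

    # Deploy local source to a specific resource group and environment,
    # exposing the app over external ingress on port 8080 (names are placeholders)
    az containerapp up \
      --name myapp \
      --resource-group mygroup \
      --environment myenv \
      --source ./src \
      --ingress external \
      --target-port 8080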

    We saw earlier that Visual Studio can generate a GitHub Actions workflow to automatically build and deploy your app on every commit. “az containerapp up” can do this too. The following adds a workflow to a repo.

    az containerapp up --name myapp --repo https://github.com/myorg/myproject

    CI/CD pipelines

    When it’s time to take your app to production, it’s strongly recommended to set up a CI/CD pipeline to automatically and repeatably build, test, and deploy it. We’ve already seen that tools such as Visual Studio and Azure CLI can automatically generate a workflow for GitHub Actions. You can set up a pipeline in Azure DevOps too. This is an example Azure DevOps pipeline.

    trigger:
      branches:
        include:
          - main

    pool:
      vmImage: ubuntu-latest

    stages:
      - stage: Build
        jobs:
          - job: build
            displayName: Build app
            steps:
              - task: Docker@2
                inputs:
                  containerRegistry: 'myregistry'
                  repository: 'hello-aca'
                  command: 'buildAndPush'
                  Dockerfile: 'hello-container-apps/Dockerfile'
                  tags: '$(Build.BuildId)'

      - stage: Deploy
        jobs:
          - job: deploy
            displayName: Deploy app
            steps:
              - task: AzureCLI@2
                inputs:
                  azureSubscription: 'my-subscription(5361b9d6-46ea-43c3-a898-15f14afb0db6)'
                  scriptType: 'bash'
                  scriptLocation: 'inlineScript'
                  inlineScript: |
                    # automatically install Container Apps CLI extension
                    az config set extension.use_dynamic_install=yes_without_prompt

                    # ensure registry is configured in container app
                    az containerapp registry set \
                      --name hello-aca \
                      --resource-group mygroup \
                      --server myregistry.azurecr.io \
                      --identity system

                    # update container app
                    az containerapp update \
                      --name hello-aca \
                      --resource-group mygroup \
                      --image myregistry.azurecr.io/hello-aca:$(Build.BuildId)

    Conclusion

    In this article, we looked at a few ways to deploy your Cloud-Native applications to Azure Container Apps and how to decide which one to use based on where you are in your app’s journey to the cloud.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/zero2hero-aca-06/index.html b/blog/zero2hero-aca-06/index.html index 76c799e072..7f464774c2 100644 --- a/blog/zero2hero-aca-06/index.html +++ b/blog/zero2hero-aca-06/index.html @@ -14,14 +14,14 @@ - +

    🚀 | Observability with ACA

    · 5 min read
    Mike Morton

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Log Streaming - in Azure Portal
    • Console Connect - in Azure Portal
    • Metrics - using Azure Monitor
    • Log Analytics - using Azure Monitor
    • Metric Alerts and Log Alerts - using Azure Monitor


    In past weeks, @kendallroden wrote about what it means to be Cloud-Native and @Anthony Chu covered the various ways to get your apps running on Azure Container Apps. Today, we will talk about the observability tools you can use to observe, debug, and diagnose your Azure Container Apps.

    Azure Container Apps provides several observability features to help you debug and diagnose your apps. There are both Azure portal and CLI options you can use to help understand the health of your apps and help identify when issues arise.

    While these features are helpful throughout your container app’s lifetime, there are two that are especially helpful. Log streaming and console connect can be a huge help in the initial stages when issues often rear their ugly head. Let's dig into both of these a little.

    Log Streaming

    Log streaming allows you to use the Azure portal to view the streaming logs from your app. You’ll see the logs written from the app to the container’s console (stderr and stdout). If your app is running multiple revisions, you can choose from which revision to view logs. You can also select a specific replica if your app is configured to scale. Lastly, you can choose from which container to view the log output. This is useful when you are running a custom or Dapr sidecar container.

    Here’s an example CLI command to view the logs of a container app.

    az containerapp logs show -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.

    Console Connect

    In the Azure portal, you can connect to the console of a container in your app. Like log streaming, you can select the revision, replica, and container if applicable. After connecting to the console of the container, you can execute shell commands and utilities that you have installed in your container. You can view files and their contents, monitor processes, and perform other debugging tasks.

    This can be great for checking configuration files or even modifying a setting or library your container is using. Of course, updating a container in this fashion is not something you should do to a production app, but tweaking and re-testing an app in a non-production environment can speed up development.

    Here’s an example CLI command to connect to the console of a container app.

    az containerapp exec -n MyContainerapp -g MyResourceGroup

    You can find more information about the different options in our CLI docs.

    Metrics

    Azure Monitor collects metric data from your container app at regular intervals to help you gain insights into the performance and health of your container app. Container apps provide these metrics:

    • CPU usage
    • Memory working set bytes
    • Network in bytes
    • Network out bytes
    • Requests
    • Replica count
    • Replica restart count

    Here you can see the metrics explorer showing the replica count for an app as it scaled from one replica to fifteen, and then back down to one.

    You can also retrieve metric data through the Azure CLI.
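    For example, here's a hedged sketch of pulling the Requests metric for a container app with az monitor metrics list; the subscription and resource IDs are placeholders, and metric names should be confirmed against the list above or the Metrics blade:

    # Retrieve the Requests metric for a container app at one-minute granularity
    # (the resource ID is a placeholder)
    az monitor metrics list \
      --resource /subscriptions/<sub-id>/resourceGroups/mygroup/providers/Microsoft.App/containerApps/MyContainerapp \
      --metric Requests \
      --interval PT1M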

    Log Analytics

    Azure Monitor Log Analytics is great for viewing your historical logs emitted from your container apps. There are two custom tables of interest: ContainerAppConsoleLogs_CL, which contains all the log messages written by your app (stdout and stderr), and ContainerAppSystemLogs_CL, which contains the system messages from the Azure Container Apps service.

    You can also query Log Analytics through the Azure CLI.
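    As a rough sketch, a query like the following pulls recent console log entries; the workspace GUID is a placeholder, and the command may prompt you to install the log-analytics CLI extension:

    # Query the ContainerAppConsoleLogs_CL table for recent entries
    # (workspace ID is a placeholder; requires the log-analytics extension)
    az monitor log-analytics query \
      --workspace <workspace-guid> \
      --analytics-query "ContainerAppConsoleLogs_CL | take 20"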

    Alerts

    Azure Monitor alerts notify you so that you can respond quickly to critical issues. There are two types of alerts that you can define: metric alerts and log alerts.

    You can create alert rules from metric charts in the metric explorer and from queries in Log Analytics. You can also define and manage alerts from the Monitor|Alerts page.

    Here is what creating an alert looks like in the Azure portal. In this case we are setting an alert rule from the metric explorer to trigger an alert if the replica restart count for a specific container app is greater than two within the last fifteen minutes.
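    The same kind of rule can also be scripted. Below is a loose sketch using az monitor metrics alert create; the scope, metric name, and thresholds are placeholders and should be checked against the exact metric names shown in the portal:

    # Alert when replica restarts exceed 2 within a 15-minute window
    # (scope and metric name are placeholders - verify the exact metric name in the portal)
    az monitor metrics alert create \
      --name replica-restart-alert \
      --resource-group mygroup \
      --scopes /subscriptions/<sub-id>/resourceGroups/mygroup/providers/Microsoft.App/containerApps/MyContainerapp \
      --condition "total RestartCount > 2" \
      --window-size 15m \
      --evaluation-frequency 5m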

    To learn more about alerts, refer to Overview of alerts in Microsoft Azure.

    Conclusion

    In this article, we looked at several ways to observe, debug, and diagnose your Azure Container Apps. As you can see, there are rich portal tools and a complete set of CLI commands to use. All of these tools are helpful throughout the lifecycle of your app, so be sure to take advantage of them when you run into an issue and to help prevent issues in the first place.

    To learn more, visit Azure Container Apps | Microsoft Azure today!

    ASK THE EXPERT: LIVE Q&A

    The Azure Container Apps team will answer questions live on September 29.

    - + \ No newline at end of file diff --git a/blog/zero2hero-func-02/index.html b/blog/zero2hero-func-02/index.html index 8372a42487..1d6149a419 100644 --- a/blog/zero2hero-func-02/index.html +++ b/blog/zero2hero-func-02/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    🚀 | Durable Entities Walkthrough

    · 8 min read
    David Justo

    Welcome to Day 6 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Durable Entities
    • Some Background
    • A Programming Model
    • Entities for a Micro-Blogging Platform


    Durable Entities are a special type of Azure Function that allow you to implement stateful objects in a serverless environment. They make it easy to introduce stateful components to your app without needing to manually persist data to external storage, so you can focus on your business logic. We’ll demonstrate their power with a real-life example in the last section.

    Entities 101: Some Background

    Programming Durable Entities feels a lot like object-oriented programming, except that these “objects” exist in a distributed system. Like objects, each Entity instance has a unique identifier, i.e. an entity ID that can be used to read and manipulate their internal state. Entities define a list of operations that constrain how their internal state is managed, like an object interface.

    Some experienced readers may realize that Entities sound a lot like an implementation of the Actor Pattern. For a discussion of the relationship between Entities and Actors, please refer to this documentation.

    Entities are a part of the Durable Functions Extension, an extension of Azure Functions that empowers programmers with stateful abstractions for serverless, such as Orchestrations (i.e. workflows).

    Durable Functions is available in most Azure Functions runtime environments: .NET, Node.js, Python, PowerShell, and Java (preview). For this article, we’ll focus on the C# experience, but note that Entities are also available in Node.js and Python; their availability in other languages is underway.

    Entities 102: The programming model

    Imagine you want to implement a simple Entity that just counts things. Its interface allows you to get the current count, add to the current count, and to reset the count to zero.

    If you implement this in an object-oriented way, you’d probably define a class (say “Counter”) with a method to get the current count (say “Counter.Get”), another to add to the count (say “Counter.Add”), and another to reset the count (say “Counter.Reset”). Well, the implementation of an Entity in C# is not that different from this sketch:

    [JsonObject(MemberSerialization.OptIn)]
    public class Counter
    {
        [JsonProperty("value")]
        public int Value { get; set; }

        public void Add(int amount)
        {
            this.Value += amount;
        }

        public Task Reset()
        {
            this.Value = 0;
            return Task.CompletedTask;
        }

        public Task<int> Get()
        {
            return Task.FromResult(this.Value);
        }

        [FunctionName(nameof(Counter))]
        public static Task Run([EntityTrigger] IDurableEntityContext ctx)
            => ctx.DispatchAsync<Counter>();
    }

    We’ve defined a class named Counter, with an internal count stored in the variable “Value” which is manipulated through the “Add” and “Reset” methods, and which can be read via “Get”.

The “Run” method is simply boilerplate required for the Azure Functions framework to interact with the object we’ve defined – it’s the method that the framework calls internally when it needs to load the Entity object. When DispatchAsync is called, the Entity and its corresponding state (the last count in “Value”) is loaded from storage. Again, this is mostly just boilerplate: your Entity’s business logic lies in the rest of the class.

Finally, the JSON annotations on the class and on the Value field tell the Durable Functions framework that the “Value” field is to be durably persisted as part of the durable state on each Entity invocation. If you were to annotate other class variables with JsonProperty, they would also become part of the managed state.
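For instance, here is a sketch (the lastModified field is hypothetical and not part of the original sample) showing how annotating another variable would make it part of the Counter's durable state:

[JsonObject(MemberSerialization.OptIn)]
public class Counter
{
    [JsonProperty("value")]
    public int Value { get; set; }

    // Hypothetical extra field: because it is annotated with JsonProperty,
    // it is persisted as part of the entity's durable state as well.
    [JsonProperty("lastModified")]
    public DateTime LastModified { get; set; }

    public void Add(int amount)
    {
        this.Value += amount;
        this.LastModified = DateTime.UtcNow;
    }

    // ... the remaining operations and the boilerplate "Run" method are unchanged
}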

    Entities for a micro-blogging platform

We’ll try to implement a simple micro-blogging platform, a la Twitter. Let’s call it “Chirper”. In Chirper, users write chirps (i.e. tweets), they can follow and unfollow other users, and they can read the chirps of users they follow.

Defining Entities

    Just like in OOP, it’s useful to begin by identifying what are the stateful agents of this scenario. In this case, users have state (who they follow and their chirps), and chirps have state in the form of their content. So, we could model these stateful agents as Entities!

    Below is a potential way to implement a User for Chirper as an Entity:

[JsonObject(MemberSerialization = MemberSerialization.OptIn)]
public class User : IUser
{
    [JsonProperty]
    public List<string> FollowedUsers { get; set; } = new List<string>();

    public void Add(string user)
    {
        FollowedUsers.Add(user);
    }

    public void Remove(string user)
    {
        FollowedUsers.Remove(user);
    }

    public Task<List<string>> Get()
    {
        return Task.FromResult(FollowedUsers);
    }

    // note: removed boilerplate "Run" method, for conciseness.
}

In this case, our Entity’s internal state is stored in “FollowedUsers”, which is a list of the accounts followed by this user. The operations exposed by this entity allow clients to read and modify this data: it can be read via “Get”, a followed account can be added via “Add”, and a user can be unfollowed via “Remove”.

With that, we’ve modeled a Chirper user as an Entity! Recall that each Entity instance has a unique ID, so we can consider that unique ID to correspond to a specific user account.

What about chirps? Should we represent them as Entities as well? That would certainly be valid. However, we would then need to create a mapping between a user's entity ID and the entity ID of every chirp that user wrote.

    For demonstration purposes, a simpler approach would be to create an Entity that stores the list of all chirps authored by a given user; call it UserChirps. Then, we could fix each User Entity to share the same entity ID as its corresponding UserChirps Entity, making client operations easier.
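As a quick sketch of why the shared ID helps (using the hypothetical user ID "durableFan99" and a [DurableClient] IDurableClient binding like the ones shown later in this post), a client can address both entities with the same key and no extra lookup table:

// Both entities are keyed by the same user ID.
var userEntity   = new EntityId(nameof(User), "durableFan99");
var chirpsEntity = new EntityId(nameof(UserChirps), "durableFan99");

// e.g. follow another user, then read back this user's chirps.
await client.SignalEntityAsync(userEntity, "Add", "someOtherUser");
var chirps = await client.ReadEntityStateAsync<UserChirps>(chirpsEntity);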

    Below is a simple implementation of UserChirps:

[JsonObject(MemberSerialization = MemberSerialization.OptIn)]
public class UserChirps : IUserChirps
{
    [JsonProperty]
    public List<Chirp> Chirps { get; set; } = new List<Chirp>();

    public void Add(Chirp chirp)
    {
        Chirps.Add(chirp);
    }

    public void Remove(DateTime timestamp)
    {
        Chirps.RemoveAll(chirp => chirp.Timestamp == timestamp);
    }

    public Task<List<Chirp>> Get()
    {
        return Task.FromResult(Chirps);
    }

    // Omitted boilerplate "Run" function
}

Here, our state is stored in Chirps, a list of user posts. Our operations follow the same pattern as before: the list can be read via Get, a chirp can be added via Add, and one can be removed via Remove. It’s the same pattern as the User entity, but we’re representing different data.

    To put it all together, let’s set up Entity clients to generate and manipulate these Entities according to some REST API.

Interacting with Entities

    Before going there, let’s talk briefly about how you can interact with an Entity. Entity interactions take one of two forms -- calls and signals:

• Calling an entity is a two-way communication. You send an operation message to the entity and then wait for the response message before you continue. The response can be a result value or an error.
• Signaling an entity is a one-way (fire-and-forget) communication. You send an operation message but don’t wait for a response. You have the reassurance that the message will be delivered eventually, but you don’t know when and don’t know what the response is.

For example, when you read the state of an Entity, you are performing a “call” interaction. When you record that a user has followed another, you may choose to simply signal it.
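To make the distinction concrete, here is a short sketch (assuming the IUserChirps interface and Chirp type from this post): a plain client can signal an entity or read its state, while a true two-way call is made from within an orchestration via IDurableOrchestrationContext:

// One-way signal from a client binding ([DurableClient] IDurableClient client):
await client.SignalEntityAsync<IUserChirps>("durableFan99", chirps => chirps.Add(chirp));

// Two-way call from inside an orchestration (IDurableOrchestrationContext context):
var entityId = new EntityId(nameof(UserChirps), "durableFan99");
List<Chirp> posts = await context.CallEntityAsync<List<Chirp>>(entityId, "Get");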

Now say a user with a given userId (say “durableFan99”) wants to post a chirp. For this, you can write an HTTP endpoint that signals the UserChirps entity to record that chirp. We can leverage the HTTP Trigger functionality from Azure Functions and pair it with an entity client binding that signals the Add operation of our UserChirps Entity:

    [FunctionName("UserChirpsPost")] 
    public static async Task<HttpResponseMessage> UserChirpsPost(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "user/{userId}/chirps")]
    HttpRequestMessage req,
    DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {
    Authenticate(req, userId);
    var chirp = new Chirp()
    {
    UserId = userId,
    Timestamp = DateTime.UtcNow,
    Content = await req.Content.ReadAsStringAsync(),
    };
    await client.SignalEntityAsync<IUserChirps>(userId, x => x.Add(chirp));
    return req.CreateResponse(HttpStatusCode.Accepted, chirp);
    }

    Following the same pattern as above, to get all the chirps from a user, you could read the status of your Entity via ReadEntityStateAsync, which follows the call-interaction pattern as your client expects a response:

    [FunctionName("UserChirpsGet")] 
    public static async Task<HttpResponseMessage> UserChirpsGet(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "user/{userId}/chirps")] HttpRequestMessage req,
    [DurableClient] IDurableClient client,
    ILogger log,
    string userId)
    {

    Authenticate(req, userId);
    var target = new EntityId(nameof(UserChirps), userId);
    var chirps = await client.ReadEntityStateAsync<UserChirps>(target);
    return chirps.EntityExists
    ? req.CreateResponse(HttpStatusCode.OK, chirps.EntityState.Chirps)
    : req.CreateResponse(HttpStatusCode.NotFound);
    }

    And there you have it! To play with a complete implementation of Chirper, you can try out our sample in the Durable Functions extension repo.

    Thank you!

    info

    Thanks for following along, and we hope you find Entities as useful as we do! If you have questions or feedback, please file issues in the repo above or tag us @AzureFunctions on Twitter

    - + \ No newline at end of file diff --git a/blog/zero2hero-func-03/index.html b/blog/zero2hero-func-03/index.html index 76efb4e216..ea42a045a6 100644 --- a/blog/zero2hero-func-03/index.html +++ b/blog/zero2hero-func-03/index.html @@ -14,13 +14,13 @@ - +

    🚀 | Use Custom Handlers For Go

    · 6 min read
    Melony Qin

    Welcome to Day 12 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • What are Custom Handlers, and why use them?
    • How Custom Handler Works
    • Message Processing With Azure Custom Handler
    • Azure Portal Monitoring


If you have been working with Azure Functions for a while, you may know that Azure Functions is a serverless FaaS (Function as a Service) offering provided by Microsoft Azure, built for key scenarios including building web APIs, processing file uploads, responding to database changes, processing IoT data streams, managing message queues, and more.

    Custom Handlers: What and Why

Azure Functions supports multiple programming languages including C#, F#, Java, JavaScript, TypeScript, Python, and PowerShell. If you want extended language support with Azure Functions for other languages such as Go and Rust, that's where custom handlers come in.

An Azure Functions custom handler lets you author Azure Functions in any language that supports HTTP primitives. With custom handlers, you can use triggers and input and output bindings via extension bundles, so all the triggers and bindings you're used to with Azure Functions are supported.

    How a Custom Handler Works

Let’s take a look at custom handlers and how they work.

• A request is sent to the Functions host when an event is triggered. The Functions host then issues a request payload to the custom handler, which holds the trigger and input binding data as well as other metadata for the function.
• The custom handler is a local HTTP web server. It executes the function code and returns a response payload to the Functions host.
• The Functions host passes data from the response to the function's output bindings, which pass it on to the downstream services for data processing.

Check out this article to learn more about Azure Functions custom handlers.


    Message processing with Custom Handlers

    Message processing is one of the key scenarios that Azure functions are trying to address. In the message-processing scenario, events are often collected in queues. These events can trigger Azure functions to execute a piece of business logic.

    You can use the Service Bus trigger to respond to messages from an Azure Service Bus queue - it's then up to the Azure functions custom handlers to take further actions to process the messages. The process is described in the following diagram:

    Building Serverless Go Applications with Azure functions custom handlers

In Azure Functions, the function.json file defines the function's trigger, input and output bindings, and other configuration settings. Note that every function can have multiple bindings, but it can only have one trigger. The following is an example of setting up the Service Bus queue trigger in the function.json file:

{
  "bindings": [
    {
      "name": "queueItem",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "functionqueue",
      "connection": "ServiceBusConnection"
    }
  ]
}

You can add a binding definition in the function.json to write the output to a database or any other destination you choose. Supported bindings can be found here.

Since we’re programming in Go, we need to set the value of defaultExecutablePath to our handler executable in the customHandler.description section of the host.json file.

Assume we’re developing on Windows and have named our Go application server.go. Running the go build server.go command produces an executable called server.exe, so we set server.exe in host.json as in the following example:

      "customHandler": {
    "description": {
    "defaultExecutablePath": "./server.exe",
    "workingDirectory": "",
    "arguments": []
    }
    }

We’re showcasing a simple Go application here with Azure Functions custom handlers that prints out the messages received from the Functions host. The following is the full code of the server.go application:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
)

// InvokeRequest mirrors the payload the Functions host sends to the custom handler.
type InvokeRequest struct {
    Data     map[string]json.RawMessage
    Metadata map[string]interface{}
}

// queueHandler receives the Service Bus trigger payload and prints the message body.
func queueHandler(w http.ResponseWriter, r *http.Request) {
    var invokeRequest InvokeRequest

    d := json.NewDecoder(r.Body)
    d.Decode(&invokeRequest)

    var parsedMessage string
    json.Unmarshal(invokeRequest.Data["queueItem"], &parsedMessage)

    fmt.Println(parsedMessage)
}

func main() {
    // The Functions host tells the custom handler which port to listen on.
    customHandlerPort, exists := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
    if !exists {
        customHandlerPort = "8080"
    }
    mux := http.NewServeMux()
    mux.HandleFunc("/MessageProcessorFunction", queueHandler)
    fmt.Println("Go server Listening on: ", customHandlerPort)
    log.Fatal(http.ListenAndServe(":"+customHandlerPort, mux))
}

Ensure you have Azure Functions Core Tools installed, then use the func start command to start the function. We then use a C#-based message sender application on GitHub to send 3,000 messages to the Azure Service Bus queue. You'll see Azure Functions instantly start processing the messages and printing them out as follows:

    Monitoring Serverless Go Applications with Azure functions custom handlers


    Azure portal monitoring

Let’s go back to the Azure portal to see how those messages in the Azure Service Bus queue were processed. There were 3,000 messages queued in the Service Bus queue (the blue line represents incoming messages). The outgoing messages (the red line with the smaller wave shape) show that they are progressively being read by Azure Functions, as shown below:

    Monitoring Serverless Go Applications with Azure functions custom handlers

    Check out this article about monitoring Azure Service bus for further information.

    Next steps

    Thanks for following along, we’re looking forward to hearing your feedback. Also, if you discover potential issues, please record them on Azure Functions host GitHub repository or tag us @AzureFunctions on Twitter.

    RESOURCES

To start building your serverless applications with custom handlers, check out the official documentation:

    Life is a journey of learning. Let’s stay tuned!

    - + \ No newline at end of file diff --git a/blog/zero2hero-func-05/index.html b/blog/zero2hero-func-05/index.html index a8fc0fd40c..3e44164800 100644 --- a/blog/zero2hero-func-05/index.html +++ b/blog/zero2hero-func-05/index.html @@ -14,13 +14,13 @@ - +

    🚀 | Error Handling w/ Apache Kafka

    · 6 min read
    Ramya Oruganti

    Welcome to Day 19 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Retry Policy Support - in Apache Kafka Extension
    • AutoOffsetReset property - in Apache Kafka Extension
    • Key support for Kafka messages - in Apache Kafka Extension
    • References: Apache Kafka Extension for Azure Functions


    Recently we launched the Apache Kafka extension for Azure functions in GA with some cool new features like deserialization of Avro Generic records and Kafka headers support. We received great responses - so we're back with more updates!

    Retry Policy support

Handling errors in Azure Functions is important to avoid data loss, avoid missed events, and to monitor the health of an application. The Apache Kafka Extension for Azure Functions supports a retry policy, which tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.

    A retry policy is evaluated when a trigger function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry.

There are two retry strategies supported by the policy that you can configure: fixed delay and exponential backoff.

    1. Fixed Delay - A specified amount of time is allowed to elapse between each retry.
    2. Exponential Backoff - The first retry waits for the minimum delay. On subsequent retries, time is added exponentially to the initial duration for each retry, until the maximum delay is reached. Exponential back-off adds some small randomization to delays to stagger retries in high-throughput scenarios.
    Please Note

Retry policy for the Kafka extension is NOT supported for C# (in-proc and out-of-proc) triggers and output bindings. It is supported for languages like Java, Node (JS, TypeScript), PowerShell, and Python triggers and output bindings.

    Here is the sample code view of exponential backoff retry strategy

    Error Handling with Apache Kafka extension for Azure Functions

    AutoOffsetReset property

The AutoOffsetReset property enables customers to configure the consumer's behaviour in the absence of an initial offset. Imagine a scenario where you need to change the consumer group name. Previously, a consumer connected with a new consumer group had to reprocess all events starting from the oldest (earliest) one, because that was the default and the setting wasn’t exposed as a configurable option in the Apache Kafka extension for Azure Functions. With this Kafka setting you can now configure how to start processing events for newly created consumer groups.

Because this setting couldn't be configured, offset commit errors caused topics to restart from the earliest offset. Users wanted to be able to set the offset to either latest or earliest based on their requirements.

We are happy to share that the AutoOffsetReset setting is now configurable to either Earliest (the default) or Latest. Setting the value to Earliest consumes messages from the earliest/smallest offset, or the beginning of the topic partition. Setting it to Latest consumes messages from the latest/largest offset, or the end of the topic partition. This is supported for all Azure Functions supported languages (C# (in & out), Java, Node (JS and TypeScript), PowerShell, and Python) and can be used for both triggers and output bindings.

    Error Handling with Apache Kafka extension for Azure Functions

    Key support for Kafka messages

With keys, the producer/output binding can determine which broker partition a message is written to. Alongside the message value, we can choose to send a message key, and that key can be whatever you want: a string, a number, and so on. If you don't send a key, the key is set to null and the data is distributed across partitions in a round-robin fashion. But if you send a key with your message, all messages that share the same key will always go to the same partition, so you can group related messages into partitions.

    Previously while consuming a Kafka event message using the Azure Function kafka extension, the event key was always none although the key was present in the event message.

Key support was implemented in the extension, which enables customers to view the key on Kafka event messages coming in to the Kafka trigger and to set keys on messages going out to Kafka topics through the output binding. Key support covers both trigger and output bindings for all Azure Functions supported languages (C# (in & out), Java, Node (JS and TypeScript), PowerShell, and Python).

Here is a view of output binding producer code where Kafka messages are being sent with a key:

    Error Handling with Apache Kafka extension for Azure Functions
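Here is a rough C# sketch of such a producer, assuming the KafkaEventData<TKey, TValue> type from the Kafka extension and an IAsyncCollector output binding; the attribute parameters (broker list setting and topic name) are placeholders and will differ for your environment:

[FunctionName("KafkaKeyedProducer")]
public static async Task Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
    [Kafka("BrokerList", "mytopic")] IAsyncCollector<KafkaEventData<string, string>> events,
    ILogger log)
{
    var message = new KafkaEventData<string, string>
    {
        Key = "user-123",                             // hypothetical key
        Value = await req.Content.ReadAsStringAsync() // message payload
    };

    // Messages that share the same key always land on the same partition.
    await events.AddAsync(message);
    log.LogInformation("Produced a keyed Kafka message");
}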

    Conclusion:

In this article you learned about the latest additions to the Apache Kafka extension for Azure Functions. If you have been waiting for these features or need them, you're all set, so go ahead and try them out! They are available in the latest extension bundles.

    Want to learn more?

Please refer to Apache Kafka bindings for Azure Functions | Microsoft Docs for detailed documentation, samples for the Azure Functions supported languages, and more!

    References

    FEEDBACK WELCOME

    Keep in touch with us on Twitter via @AzureFunctions.

    - + \ No newline at end of file diff --git a/blog/zero2hero-func-07/index.html b/blog/zero2hero-func-07/index.html index a4d041f790..304dd70368 100644 --- a/blog/zero2hero-func-07/index.html +++ b/blog/zero2hero-func-07/index.html @@ -14,14 +14,14 @@ - +

    🚀 | Monitor + Troubleshoot Apps

    · 5 min read
    Madhura Bharadwaj

    Welcome to Day 26 of #30DaysOfServerless!

    Today, we have a special set of posts from our Zero To Hero 🚀 initiative, featuring blog posts authored by our Product Engineering teams for #ServerlessSeptember. Posts were originally published on the Apps on Azure blog on Microsoft Tech Community.


    What We'll Cover

    • Monitoring your Azure Functions
    • Built-in log streaming
    • Live Metrics stream
    • Troubleshooting Azure Functions


    Monitoring your Azure Functions:

    Azure Functions uses Application Insights to collect and analyze log data from individual function executions in your function app.

    Using Application Insights

Application Insights collects log, performance, and error data. It automatically detects performance anomalies and includes powerful analytics tools, so you can more easily diagnose issues and better understand how your functions are used. These tools are designed to help you continuously improve the performance and usability of your functions. You can even use Application Insights during local function app project development.

    Typically, you create an Application Insights instance when you create your function app. In this case, the instrumentation key required for the integration is already set as an application setting named APPINSIGHTS_INSTRUMENTATIONKEY. With Application Insights integration enabled, telemetry data is sent to your connected Application Insights instance. This data includes logs generated by the Functions host, traces written from your functions code, and performance data. In addition to data from your functions and the Functions host, you can also collect data from the Functions scale controller.

    By default, the data collected from your function app is stored in Application Insights. In the Azure portal, Application Insights provides an extensive set of visualizations of your telemetry data. You can drill into error logs and query events and metrics. To learn more, including basic examples of how to view and query your collected data, see Analyze Azure Functions telemetry in Application Insights.
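For example, any trace you write through the ILogger that Functions injects ends up in the Application Insights traces table alongside the host's own telemetry; a minimal sketch (the queue name and message shape are hypothetical):

[FunctionName("ProcessOrder")]
public static void Run(
    [QueueTrigger("orders")] string orderJson, // hypothetical queue name
    ILogger log)
{
    // Structured log entries like this are queryable in Application Insights.
    log.LogInformation("Processing order payload of {Length} characters", orderJson.Length);
}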

    Using Log Streaming

    In addition to this, you can have a smoother debugging experience through log streaming. There are two ways to view a stream of log files being generated by your function executions.

    • Built-in log streaming: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during local development and when you use the Test tab in the portal. All log-based information is displayed. For more information, see Stream logs. This streaming method supports only a single instance and can't be used with an app running on Linux in a Consumption plan.
• Live Metrics Stream: when your function app is connected to Application Insights, you can view log data and other metrics in near real-time in the Azure portal using Live Metrics Stream. Use this method when monitoring functions running on multiple instances or on Linux in a Consumption plan. This method uses sampled data. Log streams can be viewed both in the portal and in most local development environments.
    Monitoring Azure Functions

Learn how to configure monitoring for your Azure Functions. See the Monitoring Azure Functions data reference for detailed information on the metrics and logs created by Azure Functions.

In addition to this, Azure Functions uses Azure Monitor to monitor the health of your function apps. Azure Functions collects the same kinds of monitoring data as other Azure resources, as described in Azure Monitor data collection.

    Troubleshooting your Azure Functions:

    When you do run into issues with your function app, Azure Functions diagnostics points out what’s wrong. It guides you to the right information to troubleshoot and resolve the issue more easily and quickly.

    Let’s explore how to use Azure Functions diagnostics to diagnose and solve common function app issues.

    1. Navigate to your function app in the Azure portal.
    2. Select Diagnose and solve problems to open Azure Functions diagnostics.
    3. Once you’re here, there are multiple ways to retrieve the information you’re looking for. Choose a category that best describes the issue of your function app by using the keywords in the homepage tile. You can also type a keyword that best describes your issue in the search bar. There’s also a section at the bottom of the page that will directly take you to some of the more popular troubleshooting tools. For example, you could type execution to see a list of diagnostic reports related to your function app execution and open them directly from the homepage.

    Monitoring and troubleshooting apps in Azure Functions

4. For example, click the Function App Down or Reporting Errors link under the Popular troubleshooting tools section. You will find detailed analysis, insights, and next steps for the issues that were detected. On the left you'll see a list of detectors. Click on them to explore more, or if there's a particular keyword you want to look for, type it into the search bar at the top.

    Monitoring and troubleshooting apps in Azure Functions

    TROUBLESHOOTING TIP

Here are some general troubleshooting tips that you can follow if you find your function app throwing the "Azure Functions Runtime unreachable" error.

    Also be sure to check out the recommended best practices to ensure your Azure Functions are highly reliable. This article details some best practices for designing and deploying efficient function apps that remain healthy and perform well in a cloud-based environment.

    Bonus tip:

    - + \ No newline at end of file diff --git a/calendar/index.html b/calendar/index.html index 628b6d4674..1e531fbfa5 100644 --- a/calendar/index.html +++ b/calendar/index.html @@ -14,13 +14,13 @@ - +

    #ServerlessSeptember Calendar

Look for these icons for signature activities.


    Upcoming

    Check this section for links to upcoming activities for #Serverless September.

When | What | Where
Sep 01 | ✍🏽 Kickoff: #30DaysOfServerless, 4 Themed Weeks | Website
Sep 01 | 🎯 Cloud Skills Challenge: Register Now | Microsoft Learn
Sep 02 | ✍🏽 Week 1: Functions-As-A-Service | Website
Sep 05 | 🚀 A walkthrough of Durable Entities | Apps On Azure
Sep 05 | 🚀 Go Cloud-Native With Azure Container Apps | Apps On Azure
Sep 07 | 🏆 Hacks: How to get into Tech And Serverless | YouTube
Sep 09 | ✍🏽 Week 2: Containers & Microservices | Website
Sep 12 | 🚀 Journey to the cloud with Azure Container Apps | Apps On Azure
Sep 12 | 🚀 Building Serverless Go Applications with Azure functions custom handlers | Apps On Azure
Sep 14 | 🏆 Hacks: How to DevOps and Serverless the Right Way | 🌟 Register Now
Sep 15 | 🎤 ATE: Azure Functions Live Q&A with Product Team | 🌟 Register Now
Sep 15 | 🚀 Error Handling with Kafka extension - and what's new with the Kafka trigger | Apps On Azure
Sep 16 | Containers, Serverless & IoT Meetup - In-Person, Online | 🌟 Register Now
Sep 16 | ✍🏽 Week 3: Serverless Integrations | Website
Sep 19 | 🚀 Azure Container Apps observability | Apps On Azure
Sep 21 | 🏆 Hacks: The Serverless Project that Got Me Promoted! | 🌟 Register Now
Sep 23 | ✍🏽 Week 4: Serverless End-to-End | Website
Sep 23 | #Learnathon - Azure Functions - In Person, Online | 🌟 Register Now
Sep 26 | 🚀 How to monitor and troubleshoot applications in Azure Functions | Apps On Azure
Sep 26 | 🚀 End-to-End solution development with code | Apps on Azure
Sep 28 | Webinar: Java Azure Functions with Kafka | 🌟 Register Now
Sep 28 | 🏆 Hacks: So you want to migrate your project to Serverless? | 🌟 Register Now
Sep 29 | 🎤 ATE: Azure Container Apps Live Q&A with Product Team | 🌟 Register Here
Sep 29 | Serverless Meetup: #SamosaChai.NET = Livestream | 🌟 Register Now
Sep 30 | 🎯 Cloud Skills Challenge: Last Day To Complete It! | Microsoft Learn

    Archive

    Check this section once events or content deadlines are past, to get links to published posts or recordings, to catch up on what you missed.

    - + \ No newline at end of file diff --git a/cnny-2023/Kubernetes-101/index.html b/cnny-2023/Kubernetes-101/index.html index fddde9e9fb..45f9a856aa 100644 --- a/cnny-2023/Kubernetes-101/index.html +++ b/cnny-2023/Kubernetes-101/index.html @@ -14,14 +14,14 @@ - +

    1-3. Kubernetes 101

    · 3 min read
    Steven Murawski

    Welcome to Day 3 of Week 1 of #CloudNativeNewYear!

    This week we'll focus on what Kubernetes is.

    What We'll Cover

    • Introduction
    • What is Kubernetes? (Video)
    • How does Kubernetes Work? (Video)
    • Conclusion


    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    Kubernetes is an open source container orchestration engine that can help with automated deployment, scaling, and management of our applications.

    Kubernetes takes physical (or virtual) resources and provides a consistent API over them, bringing a consistency to the management and runtime experience for our applications. Kubernetes provides us with a number of capabilities such as:

    • Container scheduling
    • Service discovery and load balancing
    • Storage orchestration
    • Automated rollouts and rollbacks
    • Automatic bin packing
    • Self-healing
    • Secret and configuration management

    We'll learn more about most of these topics as we progress through Cloud Native New Year.

    What is Kubernetes?

    Let's hear from Brendan Burns, one of the founders of Kubernetes as to what Kubernetes actually is.

    How does Kubernetes Work?

    And Brendan shares a bit more with us about how Kubernetes works.

    Conclusion

    Kubernetes allows us to deploy and manage our applications effectively and consistently.

By providing a consistent API across many of the concerns our applications have, like load balancing, networking, storage, and compute, Kubernetes improves our ability to both build and ship new software.

Applications can depend on standard interfaces for the resources they need. Deployments, metrics, and logs are provided in a standardized fashion, allowing more efficient operations across our application environments.

And since Kubernetes is an open source platform, it can be found in just about every type of operating environment - cloud, virtual machines, physical hardware, shared data centers, even small devices like Raspberry Pis!

    Want to learn more? Join us for a webinar on Kubernetes Concepts (or catch the playback) on Thursday, January 26th at 1 PM PST and watch for the rest of this series right here!

    - + \ No newline at end of file diff --git a/cnny-2023/aks-extensions-addons/index.html b/cnny-2023/aks-extensions-addons/index.html index 1e1111e659..707afb6ac6 100644 --- a/cnny-2023/aks-extensions-addons/index.html +++ b/cnny-2023/aks-extensions-addons/index.html @@ -14,13 +14,13 @@ - +

    4-4. Azure Kubernetes Services Add-ons and Extensions

    · 4 min read
    Jorge Arteiro

    Welcome to Day 4 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about Windows Containers. Today we'll explore addons and extensions available to Azure Kubernetes Services (AKS).

    What We'll Cover

    • Introduction
    • Add-ons
    • Extensions
    • Add-ons vs Extensions
    • Resources

    Introduction

    Azure Kubernetes Service (AKS) is a fully managed container orchestration service that makes it easy to deploy and manage containerized applications on Azure. AKS offers a number of features and capabilities, including the ability to extend its supported functionality through the use of add-ons and extensions.

    There are also integrations available from open-source projects and third parties, but they are not covered by the AKS support policy.

    Add-ons

Add-ons provide a supported way to extend AKS. Installation, configuration, and lifecycle are managed by AKS following pre-determined update rules.

As an example, let's enable Container Insights with the monitoring add-on on an existing AKS cluster using the az aks enable-addons --addons CLI command:

    az aks enable-addons \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --addons monitoring

    or you can use az aks create --enable-addons when creating new clusters

    az aks create \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --enable-addons monitoring

    The current available add-ons are:

    1. http_application_routing - Configure ingress with automatic public DNS name creation. Only recommended for development.
    2. monitoring - Container Insights monitoring.
    3. virtual-node - CNCF virtual nodes open source project.
    4. azure-policy - Azure Policy for AKS.
    5. ingress-appgw - Application Gateway Ingress Controller (AGIC).
    6. open-service-mesh - CNCF Open Service Mesh project.
    7. azure-keyvault-secrets-provider - Azure Key Vault Secrets Provider for Secret Store CSI Driver.
    8. web_application_routing - Managed NGINX ingress Controller.
    9. keda - CNCF Event-driven autoscaling project.

    For more details, get the updated list of AKS Add-ons here

    Extensions

Cluster extensions use Helm charts and integrate with Azure Resource Manager (ARM) to provide installation and lifecycle management of capabilities on top of AKS.

Extensions can be auto-upgraded across minor versions, but this requires extra management and configuration. Using the scope parameter, an extension can be installed on the whole cluster or per namespace.

AKS extensions require an Azure CLI extension to be installed. To add or update this CLI extension, use the following commands:

    az extension add --name k8s-extension

    and to update an existing extension

    az extension update --name k8s-extension

    There are only 3 available extensions:

    1. Dapr - CNCF Dapr project.
    2. Azure ML - Integrate Azure Machine Learning with AKS to train, inference and manage ML models.
    3. Flux (GitOps) - CNCF Flux project integrated with AKS to enable cluster configuration and application deployment using GitOps.

    As an example, you can install Azure ML using the following command:

    az k8s-extension create \
    --name aml-compute --extension-type Microsoft.AzureML.Kubernetes \
    --scope cluster --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters \
    --configuration-settings enableInference=True allowInsecureConnections=True

    For more details, get the updated list of AKS Extensions here

    Add-ons vs Extensions

AKS add-ons bring the advantage of being fully managed by AKS itself, while AKS extensions are more flexible and configurable but require an extra level of management.

    Add-ons are part of the AKS resource provider in the Azure API, and AKS Extensions are a separate resource provider on the Azure API.

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/archive/index.html b/cnny-2023/archive/index.html index eb73d9e2fd..2aaa4b8b5b 100644 --- a/cnny-2023/archive/index.html +++ b/cnny-2023/archive/index.html @@ -14,13 +14,13 @@ - +
    - + \ No newline at end of file diff --git a/cnny-2023/bring-your-app-day-1/index.html b/cnny-2023/bring-your-app-day-1/index.html index 619cad1c16..7ed368b668 100644 --- a/cnny-2023/bring-your-app-day-1/index.html +++ b/cnny-2023/bring-your-app-day-1/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    3-1. Bringing Your Application to Kubernetes - CI/CD

    · 14 min read
    Steven Murawski

    Welcome to Day 1 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Last we talked about Kubernetes Fundamentals. Today we'll explore getting an existing application running in Kubernetes with a full pipeline in GitHub Actions.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Our Application
    • Adding Some Infrastructure as Code
    • Building and Publishing a Container Image
    • Deploying to Kubernetes
    • Summary
    • Resources

    Our Application

This week we'll be taking an existing application - something similar to a typical line of business application - and setting it up to run in Kubernetes. Over the course of the week, we'll address different concerns. Today we'll focus on updating our CI/CD process to handle standing up (or validating that we have) an Azure Kubernetes Service (AKS) environment, building and publishing container images for our web site and API server, and getting those services running in Kubernetes.

    The application we'll be starting with is eShopOnWeb. This application has a web site and API which are backed by a SQL Server instance. It's built in .NET 7, so it's cross-platform.

    info

    For the enterprising among you, you may notice that there are a number of different eShopOn* variants on GitHub, including eShopOnContainers. We aren't using that example as it's more of an end state than a starting place. Afterwards, feel free to check out that example as what this solution could look like as a series of microservices.

    Adding Some Infrastructure as Code

    Just like last week, we need to stand up an AKS environment. This week, however, rather than running commands in our own shell, we'll set up GitHub Actions to do that for us.

There is a LOT of plumbing in this section, but once it's set up, it'll make our lives a lot easier. This section ensures that we have an environment to deploy our application into configured the way we want. We can easily extend this to accommodate multiple environments or add additional microservices with minimal new effort.

    Federated Identity

Setting up a federated identity will give us a more secure and auditable way of accessing Azure from GitHub Actions. For more about setting up a federated identity, Microsoft Learn has the details on connecting GitHub Actions to Azure.

Here, we'll just walk through the setup of the identity and configure GitHub to use that identity to deploy our AKS environment and interact with our Azure Container Registry.

    The examples will use PowerShell, but a Bash version of the setup commands is available in the week3/day1 branch.

    Prerequisites

    To follow along, you'll need:

    • a GitHub account
    • an Azure Subscription
    • the Azure CLI
    • and the Git CLI.

    You'll need to fork the source repository under your GitHub user or organization where you can manage secrets and GitHub Actions.

    It would be helpful to have the GitHub CLI, but it's not required.

    Set Up Some Defaults

    You will need to update one or more of the variables (your user or organization, what branch you want to work off of, and possibly the Azure AD application name if there is a conflict).

    # Replace the gitHubOrganizationName value
    # with the user or organization you forked
    # the repository under.

    $githubOrganizationName = 'Azure-Samples'
    $githubRepositoryName = 'eShopOnAKS'
    $branchName = 'week3/day1'
    $applicationName = 'cnny-week3-day1'

    Create an Azure AD Application

    Next, we need to create an Azure AD application.

    # Create an Azure AD application
    $aksDeploymentApplication = New-AzADApplication -DisplayName $applicationName

    Set Up Federation for that Azure AD Application

    And configure that application to allow federated credential requests from our GitHub repository for a particular branch.

    # Create a federated identity credential for the application
    New-AzADAppFederatedCredential `
    -Name $applicationName `
    -ApplicationObjectId $aksDeploymentApplication.Id `
    -Issuer 'https://token.actions.githubusercontent.com' `
    -Audience 'api://AzureADTokenExchange' `
    -Subject "repo:$($githubOrganizationName)/$($githubRepositoryName):ref:refs/heads/$branchName"

    Create a Service Principal for the Azure AD Application

    Once the application has been created, we need a service principal tied to that application. The service principal can be granted rights in Azure.

    # Create a service principal for the application
    New-AzADServicePrincipal -AppId $($aksDeploymentApplication.AppId)

    Give that Service Principal Rights to Azure Resources

    Because our Bicep deployment exists at the subscription level and we are creating role assignments, we need to give it Owner rights. If we changed the scope of the deployment to just a resource group, we could apply more scoped permissions.

    $azureContext = Get-AzContext
    New-AzRoleAssignment `
    -ApplicationId $($aksDeploymentApplication.AppId) `
    -RoleDefinitionName Owner `
    -Scope $azureContext.Subscription.Id

    Add Secrets to GitHub Repository

    If you have the GitHub CLI, you can use that right from your shell to set the secrets needed.

    gh secret set AZURE_CLIENT_ID --body $aksDeploymentApplication.AppId
    gh secret set AZURE_TENANT_ID --body $azureContext.Tenant.Id
    gh secret set AZURE_SUBSCRIPTION_ID --body $azureContext.Subscription.Id

    Otherwise, you can create them through the web interface like I did in the Learn Live event below.

    info

    It may look like the whole video will play, but it'll stop after configuring the secrets in GitHub (after about 9 minutes)

    The video shows creating the Azure AD application, service principals, and configuring the federated identity in Azure AD and GitHub.

    Creating a Bicep Deployment

Reusable Workflows

We'll create our Bicep deployment in a reusable workflow. What are reusable workflows? The previous link has the documentation, or the video below has my colleague Brandon Martinez and me talking about them.

    This workflow is basically the same deployment we did in last week's series, just in GitHub Actions.

    Start by creating a file called deploy_aks.yml in the .github/workflows directory with the below contents.

name: deploy

on:
  workflow_call:
    inputs:
      resourceGroupName:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true
    outputs:
      containerRegistryName:
        description: Container Registry Name
        value: ${{ jobs.deploy.outputs.containerRegistryName }}
      containerRegistryUrl:
        description: Container Registry Login Url
        value: ${{ jobs.deploy.outputs.containerRegistryUrl }}
      resourceGroupName:
        description: Resource Group Name
        value: ${{ jobs.deploy.outputs.resourceGroupName }}
      aksName:
        description: Azure Kubernetes Service Cluster Name
        value: ${{ jobs.deploy.outputs.aksName }}

permissions:
  id-token: write
  contents: read

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/arm-deploy@v1
        name: Run preflight validation
        with:
          deploymentName: ${{ github.run_number }}
          scope: subscription
          region: eastus
          template: ./deploy/main.bicep
          parameters: >
            resourceGroup=${{ inputs.resourceGroupName }}
          deploymentMode: Validate

  deploy:
    needs: validate
    runs-on: ubuntu-latest
    outputs:
      containerRegistryName: ${{ steps.deploy.outputs.acr_name }}
      containerRegistryUrl: ${{ steps.deploy.outputs.acr_login_server_url }}
      resourceGroupName: ${{ steps.deploy.outputs.resource_group_name }}
      aksName: ${{ steps.deploy.outputs.aks_name }}
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/arm-deploy@v1
        id: deploy
        name: Deploy Bicep file
        with:
          failOnStdErr: false
          deploymentName: ${{ github.run_number }}
          scope: subscription
          region: eastus
          template: ./deploy/main.bicep
          parameters: >
            resourceGroup=${{ inputs.resourceGroupName }}

    Adding the Bicep Deployment

    Once we have the Bicep deployment workflow, we can add it to the primary build definition in .github/workflows/dotnetcore.yml

    Permissions

    First, we need to add a permissions block to let the workflow request our Azure AD token. This can go towards the top of the YAML file (I started it on line 5).

permissions:
  id-token: write
  contents: read

    Deploy AKS Job

    Next, we'll add a reference to our reusable workflow. This will go after the build job.

  deploy_aks:
    needs: [build]
    uses: ./.github/workflows/deploy_aks.yml
    with:
      resourceGroupName: 'cnny-week3'
    secrets:
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Building and Publishing a Container Image

    Now that we have our target environment in place and an Azure Container Registry, we can build and publish our container images.

    Add a Reusable Workflow

    First, we'll create a new file for our reusable workflow at .github/workflows/publish_container_image.yml.

    We'll start the file with a name, the parameters it needs to run, and the permissions requirements for the federated identity request.

name: Publish Container Images

on:
  workflow_call:
    inputs:
      containerRegistryName:
        required: true
        type: string
      containerRegistryUrl:
        required: true
        type: string
      githubSha:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write
  contents: read

    Build the Container Images

    Our next step is to build the two container images we'll need for the application, the website and the API. We'll build the container images on our build worker and tag it with the git SHA, so there'll be a direct tie between the point in time in our codebase and the container images that represent it.

jobs:
  publish_container_image:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - name: docker build
        run: |
          docker build . -f src/Web/Dockerfile -t ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          docker build . -f src/PublicApi/Dockerfile -t ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Scan the Container Images

    Before we publish those container images, we'll scan them for vulnerabilities and best practice violations. We can add these two steps (one scan for each image).

      - name: scan web container image
        uses: Azure/container-scan@v0
        with:
          image-name: ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
      - name: scan api container image
        uses: Azure/container-scan@v0
        with:
          image-name: ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

The container images provided have a few items that'll be found. We can create an allowed list at .github/containerscan/allowedlist.yaml to define vulnerabilities or best practice violations that we'll explicitly allow to not fail our build.

general:
  vulnerabilities:
    - CVE-2022-29458
    - CVE-2022-3715
    - CVE-2022-1304
    - CVE-2021-33560
    - CVE-2020-16156
    - CVE-2019-8457
    - CVE-2018-8292
  bestPracticeViolations:
    - CIS-DI-0001
    - CIS-DI-0005
    - CIS-DI-0006
    - CIS-DI-0008

    Publish the Container Images

    Finally, we'll log in to Azure, then log in to our Azure Container Registry, and push our images.

      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: acr login
        run: az acr login --name ${{ inputs.containerRegistryName }}
      - name: docker push
        run: |
          docker push ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          docker push ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Update the Build With the Image Build and Publish

Now that we have our reusable workflow to create and publish our container images, we can include that in our primary build definition at .github/workflows/dotnetcore.yml.

  publish_container_image:
    needs: [deploy_aks]
    uses: ./.github/workflows/publish_container_image.yml
    with:
      containerRegistryName: ${{ needs.deploy_aks.outputs.containerRegistryName }}
      containerRegistryUrl: ${{ needs.deploy_aks.outputs.containerRegistryUrl }}
      githubSha: ${{ github.sha }}
    secrets:
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Deploying to Kubernetes

    Finally, we've gotten enough set up that a commit to the target branch will:

    • build and test our application code
    • set up (or validate) our AKS and ACR environment
    • and create, scan, and publish our container images to ACR

    Our last step will be to deploy our application to Kubernetes. We'll use the basic building blocks we worked with last week, deployments and services.

    Starting the Reusable Workflow to Deploy to AKS

    We'll start our workflow with our parameters that we need, as well as the permissions to access the token to log in to Azure.

    We'll check out our code, then log in to Azure, and use the az CLI to get credentials for our AKS cluster.

name: deploy_to_aks

on:
  workflow_call:
    inputs:
      aksName:
        required: true
        type: string
      resourceGroupName:
        required: true
        type: string
      containerRegistryUrl:
        required: true
        type: string
      githubSha:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: Get AKS Credentials
        run: |
          az aks get-credentials --resource-group ${{ inputs.resourceGroupName }} --name ${{ inputs.aksName }}

    Edit the Deployment For Our Current Image Tag

    Let's add the Kubernetes manifests to our repo. This post is long enough, so you can find the content for the manifests folder in the manifests folder in the source repo under the week3/day1 branch.

    tip

    If you only forked the main branch of the source repo, you can easily get the updated manifests by using the following git commands:

    git remote add upstream https://github.com/Azure-Samples/eShopOnAks
    git fetch upstream week3/day1
    git checkout upstream/week3/day1 manifests

    This will make the week3/day1 branch available locally and then we can update the manifests directory to match the state of that branch.

The deployments and the service definitions should be familiar from last week's content (but not the same). This week, however, there's a new file in the manifests - ./manifests/kustomization.yaml

    This file helps us more dynamically edit our kubernetes manifests and support is baked right in to the kubectl command.

    Kustomize Definition

    Kustomize allows us to specify specific resource manifests and areas of that manifest to replace. We've put some placeholders in our file as well, so we can replace those for each run of our CI/CD system.

    In ./manifests/kustomization.yaml you will see:

resources:
  - deployment-api.yaml
  - deployment-web.yaml

# Change the image name and version
images:
  - name: notavalidregistry.azurecr.io/api:v0.1.0
    newName: <YOUR_ACR_SERVER>/api
    newTag: <YOUR_IMAGE_TAG>
  - name: notavalidregistry.azurecr.io/web:v0.1.0
    newName: <YOUR_ACR_SERVER>/web
    newTag: <YOUR_IMAGE_TAG>

    Replacing Values in our Build

    Now, we encounter a little problem - our deployment files need to know the tag and ACR server. We can do a bit of sed magic to edit the file on the fly.

    In .github/workflows/deploy_to_aks.yml, we'll add:

      - name: replace_placeholders_with_current_run
        run: |
          sed -i "s/<YOUR_ACR_SERVER>/${{ inputs.containerRegistryUrl }}/g" ./manifests/kustomization.yaml
          sed -i "s/<YOUR_IMAGE_TAG>/${{ inputs.githubSha }}/g" ./manifests/kustomization.yaml

    Deploying the Manifests

With our manifests in place and our kustomization.yaml file (with commands to update it at runtime) ready to go, we can deploy our manifests.

First, we'll deploy our database (deployment and service). Next, we'll use the -k parameter on kubectl to tell it to look for a kustomize configuration, transform the requested manifests, and apply those. Finally, we apply the service definitions for the web and API deployments.

      - run: |
          kubectl apply -f ./manifests/deployment-db.yaml \
            -f ./manifests/service-db.yaml
          kubectl apply -k ./manifests
          kubectl apply -f ./manifests/service-api.yaml \
            -f ./manifests/service-web.yaml

    Summary

    We've covered a lot of ground in today's post. We set up federated credentials with GitHub. Then we added reusable workflows to deploy an AKS environment and build/scan/publish our container images, and then to deploy them into our AKS environment.

    This sets us up to start making changes to our application and Kubernetes configuration and have those changes automatically validated and deployed by our CI/CD system. Tomorrow, we'll look at updating our application environment with runtime configuration, persistent storage, and more.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/bring-your-app-day-2/index.html b/cnny-2023/bring-your-app-day-2/index.html index 033afb8db4..c06ec259fb 100644 --- a/cnny-2023/bring-your-app-day-2/index.html +++ b/cnny-2023/bring-your-app-day-2/index.html @@ -14,13 +14,13 @@ - +

    3-2. Bringing Your Application to Kubernetes - Adapting Storage, Secrets, and Configuration

    · 12 min read
    Paul Yu

    Welcome to Day 2 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about getting an existing application running in Kubernetes with a full pipeline in GitHub Actions. Today we'll evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes and Azure resources.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Implement environment variables using ConfigMaps
    • Implement persistent volumes using Azure Files
    • Implement secrets using Azure Key Vault
    • Re-package deployments
    • Conclusion
    • Resources
    caution

    Before you begin, make sure you've gone through yesterday's post to set up your AKS cluster.

    Gather requirements

    The eShopOnWeb application is written in .NET 7 and has two major pieces of functionality. The web UI is where customers can browse and shop. The web UI also includes an admin portal for managing the product catalog. This admin portal is packaged as a WebAssembly application and relies on a separate REST API service. Both the web UI and the REST API connect to the same SQL Server container.

    Looking through the source code, which can be found here, we can identify requirements for configs, persistent storage, and secrets.

    Database server

    • Need to store the password for the sa account as a secure secret
    • Need persistent storage volume for data directory
    • Need to inject environment variables for SQL Server license type and EULA acceptance

    Web UI and REST API service

    • Need to store database connection string as a secure secret
    • Need to inject ASP.NET environment variables to override app settings
    • Need persistent storage volume for ASP.NET key storage

    Implement environment variables using ConfigMaps

    ConfigMaps are relatively straight-forward to create. If you were following along with the examples last week, this should be review 😉

    Create a ConfigMap to store database environment variables.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mssql-settings
    data:
      MSSQL_PID: Developer
      ACCEPT_EULA: "Y"
    EOF

    Create another ConfigMap to store ASP.NET environment variables.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aspnet-settings
    data:
      ASPNETCORE_ENVIRONMENT: Development
    EOF
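
    If you want to double-check what was created, you can pull both ConfigMaps back out of the cluster:

    # Verify the ConfigMaps and their keys
    kubectl get configmap mssql-settings aspnet-settings -o yaml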

    Implement persistent volumes using Azure Files

    Similar to last week, we'll take advantage of storage classes built into AKS. For our SQL Server data, we'll use the azurefile-csi-premium storage class and leverage an Azure Files resource as our PersistentVolume.

    Create a PersistentVolumeClaim (PVC) for persisting SQL Server data.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mssql-data
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: azurefile-csi-premium
      resources:
        requests:
          storage: 5Gi
    EOF

    Create another PVC for persisting ASP.NET data.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: aspnet-data
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: azurefile-csi-premium
      resources:
        requests:
          storage: 5Gi
    EOF
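
    A quick way to confirm the claims were accepted is to list them; depending on the storage class's volume binding mode they may show as Bound right away or stay Pending until a Pod first mounts them:

    # Check the PersistentVolumeClaims
    kubectl get pvc mssql-data aspnet-data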

    Implement secrets using Azure Key Vault

    It's a well-known fact that Kubernetes Secrets are not really secrets. They're just base64-encoded values and not secure, especially if malicious users have access to your Kubernetes cluster.

    In a production scenario, you will want to leverage an external vault like Azure Key Vault or HashiCorp Vault to encrypt and store secrets.

    With AKS, we can enable the Secrets Store CSI driver add-on which will allow us to leverage Azure Key Vault.

    # Set some variables
    RG_NAME=<YOUR_RESOURCE_GROUP_NAME>
    AKS_NAME=<YOUR_AKS_CLUSTER_NAME>
    ACR_NAME=<YOUR_ACR_NAME>

    az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name $AKS_NAME \
    --resource-group $RG_NAME

    With the add-on enabled, you should see aks-secrets-store-csi-driver and aks-secrets-store-provider-azure resources installed on each node in your Kubernetes cluster.

    Run the command below to verify.

    kubectl get pods \
    --namespace kube-system \
    --selector 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'

    The Secrets Store CSI driver allows us to use secret stores via Container Storage Interface (CSI) volumes. This provider offers capabilities such as mounting and syncing between the secure vault and Kubernetes Secrets. On AKS, the Azure Key Vault Provider for Secrets Store CSI Driver enables integration with Azure Key Vault.

    You may not have an Azure Key Vault created yet, so let's create one and add some secrets to it.

    AKV_NAME=$(az keyvault create \
    --name akv-eshop$RANDOM \
    --resource-group $RG_NAME \
    --query name -o tsv)

    # Database server password
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-password \
    --value "@someThingComplicated1234"

    # Catalog database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-catalog \
    --value "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    # Identity database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-identity \
    --value "Server=db;Database=Microsoft.eShopOnWeb.Identity;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    Pods authentication using Azure Workload Identity

    In order for our Pods to retrieve secrets from Azure Key Vault, we'll need to set up a way for the Pod to authenticate against Azure AD. This can be achieved by implementing the new Azure Workload Identity feature of AKS.

    info

    At the time of this writing, the workload identity feature of AKS is in Preview.

    The workload identity feature within AKS allows us to leverage native Kubernetes resources and link a Kubernetes ServiceAccount to an Azure Managed Identity to authenticate against Azure AD.

    For the authentication flow, our Kubernetes cluster will act as an OpenID Connect (OIDC) issuer and will be able to issue identity tokens to ServiceAccounts, which will be assigned to our Pods.

    The Azure Managed Identity will be granted permission to access secrets in our Azure Key Vault, and with the ServiceAccount assigned to our Pods, they will be able to retrieve our secrets.

    For more information on how the authentication mechanism all works, check out this doc.

    To implement all this, start by enabling the new preview feature for AKS.

    az feature register \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"
    caution

    This can take several minutes to complete.

    Check the status and ensure the state shows Registered before moving forward.

    az feature show \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"

    Update your AKS cluster to enable the workload identity feature and enable the OIDC issuer endpoint.

    az aks update \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --enable-workload-identity \
    --enable-oidc-issuer

    Create an Azure Managed Identity and retrieve its client ID.

    MANAGED_IDENTITY_CLIENT_ID=$(az identity create \
    --name aks-workload-identity \
    --resource-group $RG_NAME \
    --subscription $(az account show --query id -o tsv) \
    --query 'clientId' -o tsv)

    Create the Kubernetes ServiceAccount.

    # Set namespace (this must align with the namespace that your app is deployed into)
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account name
    SERVICE_ACCOUNT_NAME=eshop-serviceaccount

    # Create the service account
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      annotations:
        azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
      labels:
        azure.workload.identity/use: "true"
      name: ${SERVICE_ACCOUNT_NAME}
      namespace: ${SERVICE_ACCOUNT_NAMESPACE}
    EOF
    info

    Note: to enable this ServiceAccount to work with Azure Workload Identity, you must annotate the resource with azure.workload.identity/client-id and add the label azure.workload.identity/use: "true"

    That was a lot... Let's review what we just did.

    We have an Azure Managed Identity (object in Azure AD), an OIDC issuer URL (endpoint in our Kubernetes cluster), and a Kubernetes ServiceAccount.

    The next step is to "tie" these components together and establish a Federated Identity Credential so that Azure AD can trust authentication requests from your Kubernetes cluster.

    info

    This identity federation can be established between Azure AD and any Kubernetes cluster; not just AKS 🤗

    To establish the federated credential, we'll need the OIDC issuer URL, and a subject which points to your Kubernetes ServiceAccount.

    # Get the OIDC issuer URL
    OIDC_ISSUER_URL=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)

    # Set the subject name using this format: `system:serviceaccount:<YOUR_SERVICE_ACCOUNT_NAMESPACE>:<YOUR_SERVICE_ACCOUNT_NAME>`
    SUBJECT=system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME

    az identity federated-credential create \
    --name aks-federated-credential \
    --identity-name aks-workload-identity \
    --resource-group $RG_NAME \
    --issuer $OIDC_ISSUER_URL \
    --subject $SUBJECT
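
    To verify the federation was created as expected, you can list the credentials on the managed identity:

    # Confirm the federated credential and its subject
    az identity federated-credential list \
      --identity-name aks-workload-identity \
      --resource-group $RG_NAME \
      --query "[].{name:name, subject:subject}" -o table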

    With the authentication components set, we can now create a SecretProviderClass which includes details about the Azure Key Vault, the secrets to pull from the vault, and the identity used to access the vault.

    # Get the tenant id for the key vault
    TENANT_ID=$(az keyvault show \
    --name $AKV_NAME \
    --resource-group $RG_NAME \
    --query properties.tenantId -o tsv)

    # Create the secret provider for azure key vault
    kubectl apply -f - <<EOF
    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      name: eshop-azure-keyvault
    spec:
      provider: azure
      parameters:
        usePodIdentity: "false"
        useVMManagedIdentity: "false"
        clientID: "${MANAGED_IDENTITY_CLIENT_ID}"
        keyvaultName: "${AKV_NAME}"
        cloudName: ""
        objects: |
          array:
            - |
              objectName: mssql-password
              objectType: secret
              objectVersion: ""
            - |
              objectName: mssql-connection-catalog
              objectType: secret
              objectVersion: ""
            - |
              objectName: mssql-connection-identity
              objectType: secret
              objectVersion: ""
        tenantId: "${TENANT_ID}"
      secretObjects:
      - secretName: eshop-secrets
        type: Opaque
        data:
        - objectName: mssql-password
          key: mssql-password
        - objectName: mssql-connection-catalog
          key: mssql-connection-catalog
        - objectName: mssql-connection-identity
          key: mssql-connection-identity
    EOF

    Finally, let's grant the Azure Managed Identity permission to retrieve secrets from the Azure Key Vault.

    az keyvault set-policy \
    --name $AKV_NAME \
    --secret-permissions get \
    --spn $MANAGED_IDENTITY_CLIENT_ID

    Re-package deployments

    Update your database deployment to load environment variables from our ConfigMap, attach the PVC and SecretProviderClass as volumes, mount the volumes into the Pod, and use the ServiceAccount to retrieve secrets.

    Additionally, you may notice the database Pod is set to use fsGroup: 10001 as part of the securityContext. This is required because the MSSQL container runs as a non-root account called mssql, and the fsGroup setting ensures this account has the proper permissions to read/write data at the /var/opt/mssql mount path.

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: db
      labels:
        app: db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          securityContext:
            fsGroup: 10001
          serviceAccountName: ${SERVICE_ACCOUNT_NAME}
          containers:
          - name: db
            image: mcr.microsoft.com/mssql/server:2019-latest
            ports:
            - containerPort: 1433
            envFrom:
            - configMapRef:
                name: mssql-settings
            env:
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-password
            resources: {}
            volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
          volumes:
          - name: mssqldb
            persistentVolumeClaim:
              claimName: mssql-data
          - name: eshop-secrets
            csi:
              driver: secrets-store.csi.k8s.io
              readOnly: true
              volumeAttributes:
                secretProviderClass: eshop-azure-keyvault
    EOF

    We'll update the API and Web deployments in a similar way.

    # Set the image tag
    IMAGE_TAG=<YOUR_IMAGE_TAG>

    # API deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
      labels:
        app: api
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          serviceAccount: ${SERVICE_ACCOUNT_NAME}
          containers:
          - name: api
            image: ${ACR_NAME}.azurecr.io/api:${IMAGE_TAG}
            ports:
            - containerPort: 80
            envFrom:
            - configMapRef:
                name: aspnet-settings
            env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
            resources: {}
            volumeMounts:
            - name: aspnet
              mountPath: /root/.aspnet/https
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
          volumes:
          - name: aspnet
            persistentVolumeClaim:
              claimName: aspnet-data
          - name: eshop-secrets
            csi:
              driver: secrets-store.csi.k8s.io
              readOnly: true
              volumeAttributes:
                secretProviderClass: eshop-azure-keyvault
    EOF

    # Web deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
      labels:
        app: web
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          serviceAccount: ${SERVICE_ACCOUNT_NAME}
          containers:
          - name: web
            image: ${ACR_NAME}.azurecr.io/web:${IMAGE_TAG}
            ports:
            - containerPort: 80
            envFrom:
            - configMapRef:
                name: aspnet-settings
            env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
            resources: {}
            volumeMounts:
            - name: aspnet
              mountPath: /root/.aspnet/https
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
          volumes:
          - name: aspnet
            persistentVolumeClaim:
              claimName: aspnet-data
          - name: eshop-secrets
            csi:
              driver: secrets-store.csi.k8s.io
              readOnly: true
              volumeAttributes:
                secretProviderClass: eshop-azure-keyvault
    EOF

    If all went well with your deployment updates, you should be able to browse to your website and buy some merchandise again 🥳

    echo "http://$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Conclusion

    Although there are no visible changes to our website, we've made a ton of changes on the Kubernetes backend to make this application much more secure and resilient.

    We used a combination of Kubernetes resources and AKS-specific features to achieve our goal of securing our secrets and ensuring data is not lost on container crashes and restarts.

    To learn more about the components we leveraged here today, check out the resources and additional tutorials listed below.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/bring-your-app-day-3/index.html b/cnny-2023/bring-your-app-day-3/index.html index ab4a40f7f3..04eb91e844 100644 --- a/cnny-2023/bring-your-app-day-3/index.html +++ b/cnny-2023/bring-your-app-day-3/index.html @@ -14,13 +14,13 @@ - +

    3-3. Bringing Your Application to Kubernetes - Opening your Application with Ingress

    · 10 min read
    Paul Yu

    Welcome to Day 3 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we added configuration, secrets, and storage to our app. Today we'll explore how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Generate TLS certificate and store in Azure Key Vault
    • Implement custom DNS using Azure DNS
    • Enable Web Application Routing add-on for AKS
    • Implement Ingress for the web application
    • Conclusion
    • Resources

    Gather requirements

    Currently, our eShopOnWeb app has three Kubernetes services deployed:

    1. db exposed internally via ClusterIP
    2. api exposed externally via LoadBalancer
    3. web exposed externally via LoadBalancer

    As mentioned in my post last week, Services allow applications to communicate with each other using DNS names. Kubernetes has built-in service discovery capabilities that allow Pods to resolve Services simply by using their names.

    In the case of our api and web deployments, they can simply reach the database by calling its name. The service type of ClusterIP for the db can remain as-is since it only needs to be accessed by the api and web apps.

    On the other hand, api and web both need to be accessed over the public internet. Currently, these services are using service type LoadBalancer which tells AKS to provision an Azure Load Balancer with a public IP address. No one is going to remember the IP addresses, so we need to make the app more accessible by adding a custom domain name and securing it with a TLS certificate.

    Here's what we're going to need:

    • Custom domain name for our app
    • TLS certificate for the custom domain name
    • Routing rule to ensure requests with /api/ in the URL are routed to the backend REST API
    • Routing rule to ensure requests without /api/ in the URL are routed to the web UI

    Just like last week, we will use the Web Application Routing add-on for AKS. But this time, we'll integrate it with Azure DNS and Azure Key Vault to satisfy all of our requirements above.

    info

    At the time of this writing the add-on is still in Public Preview

    Generate TLS certificate and store in Azure Key Vault

    We deployed an Azure Key Vault yesterday to store secrets. We'll use it again to store a TLS certificate too.

    Let's create and export a self-signed certificate for the custom domain.

    DNS_NAME=eshoponweb$RANDOM.com
    openssl req -new -x509 -nodes -out web-tls.crt -keyout web-tls.key -subj "/CN=${DNS_NAME}" -addext "subjectAltName=DNS:${DNS_NAME}"
    openssl pkcs12 -export -in web-tls.crt -inkey web-tls.key -out web-tls.pfx -password pass:
    info

    For learning purposes we'll use a self-signed certificate and a fake custom domain name.

    To browse to the site using the fake domain, we'll mimic a DNS lookup by adding an entry to your host file which maps the public IP address assigned to the ingress controller to the custom domain.

    In a production scenario, you will need to have a real domain delegated to Azure DNS and a valid TLS certificate for the domain.

    Grab your Azure Key Vault name and set the value in a variable for later use.

    RESOURCE_GROUP=cnny-week3

    AKV_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.KeyVault/vaults \
    --query "[0].name" -o tsv)

    Grant yourself permissions to get, list, and import certificates.

    MY_USER_NAME=$(az account show --query user.name -o tsv)
    MY_USER_OBJECT_ID=$(az ad user show --id $MY_USER_NAME --query id -o tsv)

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MY_USER_OBJECT_ID \
    --certificate-permissions get list import

    Upload the TLS certificate to Azure Key Vault and grab its certificate URI.

    WEB_TLS_CERT_ID=$(az keyvault certificate import \
    --vault-name $AKV_NAME \
    --name web-tls \
    --file web-tls.pfx \
    --query id \
    --output tsv)

    Implement custom DNS with Azure DNS

    Create a custom domain for our application and grab its Azure resource id.

    DNS_ZONE_ID=$(az network dns zone create \
    --name $DNS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query id \
    --output tsv)

    Enable Web Application Routing add-on for AKS

    As we enable the Web Application Routing add-on, we'll also pass in the Azure DNS Zone resource id which triggers the installation of the external-dns controller in your Kubernetes cluster. This controller will be able to write Azure DNS zone entries on your behalf as you deploy Ingress manifests.

    AKS_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerService/managedClusters \
    --query "[0].name" -o tsv)

    az aks enable-addons \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --addons web_application_routing \
    --dns-zone-resource-id=$DNS_ZONE_ID \
    --enable-secret-rotation

    The add-on will also deploy a new Azure Managed Identity which is used by the external-dns controller when writing Azure DNS zone entries. Currently, it does not have permission to do that, so let's grant it permission.

    # This is where resources are automatically deployed by AKS
    NODE_RESOURCE_GROUP=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query nodeResourceGroup -o tsv)

    # This is the managed identity created by the Web Application Routing add-on
    MANAGED_IDENTTIY_OBJECT_ID=$(az resource show \
    --name webapprouting-${AKS_NAME} \
    --resource-group $NODE_RESOURCE_GROUP \
    --resource-type Microsoft.ManagedIdentity/userAssignedIdentities \
    --query properties.principalId \
    --output tsv)

    # Grant the managed identity permissions to write DNS entries
    az role assignment create \
    --role "DNS Zone Contributor" \
    --assignee $MANAGED_IDENTTIY_OBJECT_ID \
    --scope $DNS_ZONE_ID

    The Azure Managed Identity will also be used to retrieve and rotate TLS certificates from Azure Key Vault. So we'll need to grant it permission for that too.

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MANAGED_IDENTTIY_OBJECT_ID \
    --secret-permissions get \
    --certificate-permissions get

    Implement Ingress for the web application

    Before we create a new Ingress manifest, let's update the existing services to use ClusterIP instead of LoadBalancer. With an Ingress in place, there is no reason why we need the Service resources to be accessible from outside the cluster. The new Ingress will be the only entrypoint for external users.

    We can use the kubectl patch command to update the services:

    kubectl patch service api -p '{"spec": {"type": "ClusterIP"}}'
    kubectl patch service web -p '{"spec": {"type": "ClusterIP"}}'
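
    Both services should now report a type of ClusterIP with no external IP:

    # Verify the services were patched
    kubectl get service api web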

    Deploy a new Ingress to place in front of the web Service. Notice there is a special annotations entry for kubernetes.azure.com/tls-cert-keyvault-uri which points back to our self-signed certificate that was uploaded to Azure Key Vault.

    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.azure.com/tls-cert-keyvault-uri: ${WEB_TLS_CERT_ID}
      name: web
    spec:
      ingressClassName: webapprouting.kubernetes.azure.com
      rules:
      - host: ${DNS_NAME}
        http:
          paths:
          - backend:
              service:
                name: web
                port:
                  number: 80
            path: /
            pathType: Prefix
          - backend:
              service:
                name: api
                port:
                  number: 80
            path: /api
            pathType: Prefix
      tls:
      - hosts:
        - ${DNS_NAME}
        secretName: web-tls
    EOF

    In our manifest above, we've also configured the Ingress to route traffic to either the web or api service based on the URL path requested. If the request URL includes /api/, traffic is sent to the api backend service. Otherwise, it is sent to the web service.

    Within a few minutes, the external-dns controller will add an A record to Azure DNS which points to the Ingress resource's public IP. With the custom domain in place, we can simply browse using this domain name.

    info

    As mentioned above, since this is not a real domain name, we need to modify our host file to make it seem like our custom domain is resolving to the Ingress' public IP address.

    To get the ingress public IP, run the following:

    # Get the IP
    kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

    # Get the hostname
    kubectl get ingress web -o jsonpath="{.spec.tls[0].hosts[0]}"

    Next, open your host file and add an entry using the format <YOUR_PUBLIC_IP> <YOUR_CUSTOM_DOMAIN>. Below is an example of what it should look like.

    20.237.116.224 eshoponweb11265.com

    See this doc for more info on how to do this.
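
    On Linux or macOS you could script the host file entry; a hedged one-liner, assuming DNS_NAME is still set from earlier in this post (on Windows, edit C:\Windows\System32\drivers\etc\hosts by hand):

    # Append the ingress IP and custom domain to /etc/hosts
    echo "$(kubectl get ingress web -o jsonpath='{.status.loadBalancer.ingress[0].ip}') ${DNS_NAME}" | sudo tee -a /etc/hosts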

    When browsing to the website, you may be presented with a warning about the connection not being private. This is due to the fact that we are using a self-signed certificate. This is expected, so go ahead and proceed anyway to load up the page.

    Why is the Admin page broken?

    If you log in using the admin@microsoft.com account and browse to the Admin page, you'll notice no products are loaded on the page.

    This is because the admin page is built using Blazor and compiled as a WebAssembly application that runs in your browser. When the application was compiled, it packed the appsettings.Development.json file as an embedded resource. This file contains the base URL for the public API and it currently points to https://localhost:5099. Now that we have a domain name, we can update the base URL and point it to our custom domain.

    From the root of the eShopOnWeb repo, update the configuration file using a sed command.

    sed -i -e "s/localhost:5099/${DNS_NAME}/g" ./src/BlazorAdmin/wwwroot/appsettings.Development.json

    Rebuild and push the container to Azure Container Registry.

    # Grab the name of your Azure Container Registry
    ACR_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerRegistry/registries \
    --query "[0].name" -o tsv)

    # Invoke a build and publish job
    az acr build \
    --registry $ACR_NAME \
    --image $ACR_NAME.azurecr.io/web:v0.1.0 \
    --file ./src/Web/Dockerfile .

    Once the container build has completed, we can issue a kubectl patch command to quickly update the web deployment to test our change.

    kubectl patch deployment web -p "$(cat <<EOF
    {
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "name": "web",
                "image": "${ACR_NAME}.azurecr.io/web:v0.1.0"
              }
            ]
          }
        }
      }
    }
    EOF
    )"

    If all went well, you will be able to browse the admin page again and confirm product data is being loaded 🥳

    Conclusion

    The Web Application Routing add-on for AKS aims to streamline the process of exposing your application to the public internet using the open-source NGINX Ingress Controller. With the add-on being managed by Azure, it natively integrates with other Azure services like Azure DNS and eliminates the need to manually create DNS entries. It can also integrate with Azure Key Vault to automatically pull in TLS certificates and rotate them as needed to further reduce operational overhead.

    We are one step closer to production and in the upcoming posts we'll further operationalize and secure our deployment, so stay tuned!

    In the meantime, check out the resources listed below for further reading.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/bring-your-app-day-4/index.html b/cnny-2023/bring-your-app-day-4/index.html index 013a3f3ea5..114b94337c 100644 --- a/cnny-2023/bring-your-app-day-4/index.html +++ b/cnny-2023/bring-your-app-day-4/index.html @@ -14,13 +14,13 @@ - +

    3-4. Bringing Your Application to Kubernetes - Debugging and Instrumentation

    · 9 min read
    Steven Murawski

    Welcome to Day 4 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we exposed the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS. Today we'll explore the topic of debugging and instrumentation.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Debugging
    • Bridge To Kubernetes
    • Instrumentation
    • Resources: For self-study!

    Debugging

    Debugging applications in a Kubernetes cluster can be challenging for several reasons:

    • Complexity: Kubernetes is a complex system with many moving parts, including pods, nodes, services, and config maps, all of which can interact in unexpected ways and cause issues.
    • Distributed Environment: Applications running in a Kubernetes cluster are often distributed across multiple nodes, which makes it harder to determine the root cause of an issue.
    • Logging and Monitoring: Debugging an application in a Kubernetes cluster requires access to logs and performance metrics, which can be difficult to obtain in a large and dynamic environment.
    • Resource Management: Kubernetes manages resources such as CPU and memory, which can impact the performance and behavior of applications. Debugging resource-related issues requires a deep understanding of the Kubernetes resource model and the underlying infrastructure.
    • Dynamic Nature: Kubernetes is designed to be dynamic, with the ability to add and remove resources as needed. This dynamic nature can make it difficult to reproduce issues and debug problems.

    However, there are many tools and practices that can help make debugging applications in a Kubernetes cluster easier, such as using centralized logging, monitoring, and tracing solutions, and following best practices for managing resources and deployment configurations.

    There's also another great tool in our toolbox - Bridge to Kubernetes.

    Bridge to Kubernetes

    Bridge to Kubernetes is a great tool for microservice development and debugging applications without having to locally replicate all the required microservices.

    Bridge to Kubernetes works with Visual Studio or Visual Studio Code.

    We'll walk through using it with Visual Studio Code.

    Connecting Bridge to Kubernetes to Our Cluster

    Ensure your AKS cluster is the default for kubectl

    If you've recently spun up a new AKS cluster or you have been working with a different cluster, you may need to change what cluster credentials you have configured.

    If it's a new cluster, we can use:

    RESOURCE_GROUP=<YOUR RESOURCE GROUP NAME>
    CLUSTER_NAME=<YOUR AKS CLUSTER NAME>
    az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME

    Open the command palette

    Open the command palette and find Bridge to Kubernetes: Configure. You may need to start typing the name to get it to show up.

    The command palette for Visual Studio Code is open and the first item is Bridge to Kubernetes: Configure

    Pick the service you want to debug

    Bridge to Kubernetes will redirect a service for you. Pick the service you want to redirect, in this case we'll pick web.

    Selecting the `web` service to redirect in Visual Studio Code

    Identify the port your application runs on

    Next, we'll be prompted to identify what port our application will run on locally. For this application it'll be 5001, but that's just specific to this application (and the default for ASP.NET 7, I believe).

    Setting port 5001 as the port to redirect to the `web` Kubernetes service in Visual Studio Code

    Pick a debug configuration to extend

    Bridge to Kubernetes has a couple of ways to run - it can inject its setup and teardown into your existing debug configurations. We'll pick .NET Core Launch (web).

    Telling Bridge to Kubernetes to use the .NET Core Launch (web) debug configuration in Visual Studio Code

    Forward Traffic for All Requests

    The last prompt you'll get in the configuration is about how you want Bridge to Kubernetes to handle re-routing traffic. The default is that all requests into the service will get your local version.

    You can also redirect specific traffic. Bridge to Kubernetes will set up a subdomain and route specific traffic to your local service, while allowing other traffic to the deployed service.

    Allowing the launch of Endpoint Manager on Windows

    Using Bridge to Kubernetes to Debug Our Service

    Now that we've configured Bridge to Kubernetes, we see that tasks and a new launch configuration have been added.

    Added to .vscode/tasks.json:

        {
            "label": "bridge-to-kubernetes.resource",
            "type": "bridge-to-kubernetes.resource",
            "resource": "web",
            "resourceType": "service",
            "ports": [
                5001
            ],
            "targetCluster": "aks1",
            "targetNamespace": "default",
            "useKubernetesServiceEnvironmentVariables": false
        },
        {
            "label": "bridge-to-kubernetes.compound",
            "dependsOn": [
                "bridge-to-kubernetes.resource",
                "build"
            ],
            "dependsOrder": "sequence"
        }

    And added to .vscode/launch.json:

    {
        "name": ".NET Core Launch (web) with Kubernetes",
        "type": "coreclr",
        "request": "launch",
        "preLaunchTask": "bridge-to-kubernetes.compound",
        "program": "${workspaceFolder}/src/Web/bin/Debug/net7.0/Web.dll",
        "args": [],
        "cwd": "${workspaceFolder}/src/Web",
        "stopAtEntry": false,
        "env": {
            "ASPNETCORE_ENVIRONMENT": "Development",
            "ASPNETCORE_URLS": "http://+:5001"
        },
        "sourceFileMap": {
            "/Views": "${workspaceFolder}/Views"
        }
    }

    Launch the debug configuration

    We can start the process with the .NET Core Launch (web) with Kubernetes launch configuration in the Debug pane in Visual Studio Code.

    Launch the `.NET Core Launch (web) with Kubernetes` from the Debug pane in Visual Studio Code

    Enable the Endpoint Manager

    Part of this process includes a local service to help manage the traffic routing and your hosts file. This will require admin or sudo privileges. On Windows, you'll get a prompt like:

    Prompt to launch the endpoint manager.

    Access your Kubernetes cluster "locally"

    Bridge to Kubernetes will set up a tunnel (thanks to port forwarding) to your local workstation and create local endpoints for the other Kubernetes hosted services in your cluster, as well as pretending to be a pod in that cluster (for the application you are debugging).

    Output from Bridge To Kubernetes setup task.

    After making the connection to your Kubernetes cluster, the launch configuration will continue. In this case, we'll make a debug build of the application and attach the debugger. (This process may cause the terminal in VS Code to scroll with build output. You can find the Bridge to Kubernetes output with the local IP addresses and ports in the Output pane for Bridge to Kubernetes.)

    You can set breakpoints, use your debug console, set watches, and run tests against your local version of the service.

    Exploring the Running Application Environment

    One of the cool things that Bridge to Kubernetes does for our debugging experience is bring the environment configuration that our deployed pod would inherit. When we launch our app, it'll see configuration and secrets that we'd expect our pod to be running with.

    To test this, we'll set a breakpoint in our application's start up to see what SQL Server is being used. We'll set a breakpoint at src/Infrastructure/Dependencies.cs on line 32.

    Then, we will start debugging the application with Bridge to Kubernetes. When it hits the breakpoint, we'll open the Debug pane and type configuration.GetConnectionString("CatalogConnection").

    When we run locally (not with Bridge to Kubernetes), we'd see:

    configuration.GetConnectionString("CatalogConnection")
    "Server=(localdb)\\mssqllocaldb;Integrated Security=true;Initial Catalog=Microsoft.eShopOnWeb.CatalogDb;"

    But, with Bridge to Kubernetes we see something more like (yours will vary based on the password):

    configuration.GetConnectionString("CatalogConnection")
    "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=*****************;TrustServerCertificate=True;"

    Debugging our local application connected to Kubernetes.

    We can see that the database server configured is based on our db service and the password is pulled from our secret in Azure KeyVault (via AKS).

    This helps us run our local application just like it was actually in our cluster.

    Going Further

    Bridge to Kubernetes also supports more advanced scenarios and, as you need to start routing traffic around inside your cluster, may require you to modify your application to pass along a kubernetes-route-as header to help ensure that traffic for your debugging workloads is properly handled. The docs go into much greater detail about that.

    Instrumentation

    Now that we've figured out our debugging story, we'll need to ensure that we have the right context clues to find where we need to debug or to give us a better idea of how well our microservices are running.

    Logging and Tracing

    Logging and tracing become even more critical in Kubernetes, where your application could be running in a number of pods across different nodes. When you have an issue, in addition to the normal application data, you'll want to know what pod and what node had the issue, what the state of those resources were (were you resource constrained or were shared resources unavailable?), and if autoscaling is enabled, you'll want to know if a scale event has been triggered. There are a multitude of other concerns based on your application and the environment you maintain.

    Given these informational needs, it's crucial to revisit your existing logging and instrumentation. Most frameworks and languages have extensible logging, tracing, and instrumentation libraries that let you iteratively add information, such as pod and node states, and ensure that requests can be traced across your microservices. This will pay you back time and time again when you have to troubleshoot issues in your existing environment.

    Centralized Logging

    To enhance the troubleshooting process further, it's important to implement centralized logging to consolidate logs from all your microservices into a single location. This makes it easier to search and analyze logs when you're troubleshooting an issue.

    Automated Alerting

    Additionally, implementing automated alerting, such as sending notifications when specific conditions occur in the logs, can help you detect issues before they escalate.

    End to end Visibility

    End-to-end visibility is also essential in understanding the flow of requests and responses between microservices in a distributed system. With end-to-end visibility, you can quickly identify bottlenecks and slowdowns in the system, helping you to resolve issues more efficiently.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/bring-your-app-day-5/index.html b/cnny-2023/bring-your-app-day-5/index.html index fdad91dcf4..01ae70db56 100644 --- a/cnny-2023/bring-your-app-day-5/index.html +++ b/cnny-2023/bring-your-app-day-5/index.html @@ -14,13 +14,13 @@ - +

    3-5. Bringing Your Application to Kubernetes - CI/CD Secure Supply Chain

    · 6 min read
    Josh Duffney

    Welcome to Day 5 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about debugging and instrumenting our application. Today we'll explore the topic of container image signing and secure supply chain.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Introduction
    • Prerequisites
    • Create a digital signing certificate
    • Generate an Azure Container Registry Token
    • Set up Notation
    • Install the Notation Azure Key Vault Plugin
    • Add the signing Certificate to Notation
    • Sign Container Images
    • Conclusion

    Introduction

    The secure supply chain is a crucial aspect of software development, delivery, and deployment, and digital signing plays a critical role in this process.

    By using digital signatures to verify the authenticity and integrity of container images, organizations can improve the security of their software supply chain and reduce the risk of security breaches and data compromise.

    In this article, you'll learn how to use Notary, an open-source project hosted by the Cloud Native Computing Foundation (CNCF) to digitally sign container images stored on Azure Container Registry.

    Prerequisites

    To follow along, you'll need an instance of:

    Create a digital signing certificate

    A digital signing certificate is a certificate used to digitally sign and verify the authenticity and integrity of digital artifacts such as documents, software, and, of course, container images.

    Before you can implement digital signatures, you must first create a digital signing certificate.

    Run the following commands to generate the certificate:

    1. Create the policy file

      cat <<EOF > ./my_policy.json
      {
        "issuerParameters": {
          "certificateTransparency": null,
          "name": "Self"
        },
        "x509CertificateProperties": {
          "ekus": [
            "1.3.6.1.5.5.7.3.3"
          ],
          "key_usage": [
            "digitalSignature"
          ],
          "subject": "CN=${keySubjectName}",
          "validityInMonths": 12
        }
      }
      EOF

      The ekus and key usage of this certificate policy dictate that the certificate can only be used for digital signatures.

    2. Create the certificate in Azure Key Vault

      az keyvault certificate create --name $keyName --vault-name $keyVaultName --policy @my_policy.json

      Replace $keyName and $keyVaultName with your desired certificate name and Azure Key Vault instance name.

    Generate an Azure Container Registry token

    Azure Container Registry tokens are used to grant access to the contents of the registry. Tokens can be used for a variety of things such as pulling images, pushing images, or managing the registry.

    As part of the container image signing workflow, you'll need a token to authenticate the Notation CLI with your Azure Container Registry.

    Run the following command to generate an ACR token:

    az acr token create \
    --name $tokenName \
    --registry $registry \
    --scope-map _repositories_admin \
    --query 'credentials.passwords[0].value' \
    --only-show-errors \
    --output tsv

    Replace $tokenName with your name for the ACR token and $registry with the name of your ACR instance.
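
    Rather than copying the output by hand, you could wrap the same command to capture the password into the $tokenPassword variable used later in this post; a hedged sketch:

    # Capture the generated token password for the signing step
    tokenPassword=$(az acr token create \
      --name $tokenName \
      --registry $registry \
      --scope-map _repositories_admin \
      --query 'credentials.passwords[0].value' \
      --only-show-errors \
      --output tsv)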

    Set up Notation

    Notation is the command-line interface for the CNCF Notary project. You'll use it to digitally sign the api and web container images for the eShopOnWeb application.

    Run the following commands to download and install the Notation CLI:

    1. Open a terminal or command prompt window

    2. Download the Notary notation release

      curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      If you're not using Linux, you can find the releases here.

    3. Extract the contents of the notation.tar.gz

      tar xvzf notation.tar.gz > /dev/null 2>&1
    4. Copy the notation binary to the $HOME/bin directory

      cp ./notation $HOME/bin
    5. Add the $HOME/bin directory to the PATH environment variable

      export PATH="$HOME/bin:$PATH"
    6. Remove the downloaded files

      rm notation.tar.gz LICENSE
    7. Check the notation version

      notation --version

    Install the Notation Azure Key Vault plugin

    By design, the Notation CLI supports plugins that extend its digital signing capabilities to remote registries. In order to sign your container images stored in Azure Container Registry, you'll need to install the Azure Key Vault plugin for Notation.

    Run the following commands to install the azure-kv plugin:

    1. Download the plugin

      curl -Lo notation-azure-kv.tar.gz \
      https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      Non-Linux releases can be found here.

    2. Extract to the plugin directory & delete download files

      mkdir -p ~/.config/notation/plugins/azure-kv
      tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv > /dev/null 2>&1

      rm -rf notation-azure-kv.tar.gz
    3. Verify the plugin was installed

      notation plugin ls

    Add the signing certificate to Notation

    Now that you have Notation and the Azure Key Vault plugin installed, add the certificate's keyId created above to Notation.

    1. Get the Certificate Key ID from Azure Key Vault

      keyID=$(az keyvault certificate show \
      --vault-name $keyVaultName \
      --name $keyName \
      --query "kid" --only-show-errors --output tsv)

      Replace $keyVaultName and $keyName with the appropriate information.

    2. Add the Key ID to KMS using Notation

      notation key add --plugin azure-kv --id $keyID $keyName
    3. Check the key list

      notation key ls

    Sign Container Images

    At this point, all that's left is to sign the container images.

    Run the notation sign command to sign the api and web container images:

    notation sign $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword

    notation sign $registry.azurecr.io/api:$tag \
    --username $tokenName \
    --password $tokenPassword

    Replace $registry, $tag, $tokenName, and $tokenPassword with the appropriate values. To improve security, use a SHA hash for the tag.

    NOTE: If you didn't take note of the token password, you can rerun the az acr token create command to generate a new password.
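
    To confirm the signatures were pushed, you should be able to list them with the CLI; a hedged sketch against the v1.0.0-rc.1 release used above:

    # List signatures associated with each image
    notation ls $registry.azurecr.io/web:$tag --username $tokenName --password $tokenPassword
    notation ls $registry.azurecr.io/api:$tag --username $tokenName --password $tokenPassword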

    Conclusion

    Digital signing plays a critical role in ensuring the security of software supply chains.

    By signing software components, organizations can verify the authenticity and integrity of software, helping to prevent unauthorized modifications, tampering, and malware.

    And if you want to take digital signing to a whole new level by using them to prevent the deployment of unsigned container images, check out the Ratify project on GitHub!

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/building-with-draft/index.html b/cnny-2023/building-with-draft/index.html index 26209f27a2..b68ac9d56e 100644 --- a/cnny-2023/building-with-draft/index.html +++ b/cnny-2023/building-with-draft/index.html @@ -14,13 +14,13 @@ - +

    4-2. Jumpstart your applications with Draft

    · 3 min read
    Cory Skimming

    It's the final week of #CloudNativeNewYear! This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. In today's post, we will introduce you to the basics of the open-source project Draft and how it can be used to easily create and deploy applications to Kubernetes.

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    What We'll Cover

    • What is Draft?
    • Draft basics
    • Demo: Developing to AKS with Draft
    • Resources


    What is Draft?

    Draft is an open-source tool that can be used to streamline the development and deployment of applications on Kubernetes clusters. It provides a simple and easy-to-use workflow for creating and deploying applications, making it easier for developers to focus on writing code and building features, rather than worrying about the underlying infrastructure. This is great for users who are just getting started with Kubernetes, or those who are just looking to simplify their experience.

    New to Kubernetes?

    Draft basics

    Draft streamlines Kubernetes development by taking a non-containerized application and generating the Dockerfiles, K8s manifests, Helm charts, and other artifacts associated with a containerized application. Draft can also create a GitHub Action workflow file to quickly build and deploy your application onto any Kubernetes cluster.

    1. 'draft create': Create a new Draft project by simply running the 'draft create' command - this command will walk you through a series of questions on your application specification (such as the application language) and create a Dockerfile, Helm chart, and Kubernetes manifests for your application.
    2. 'draft generate-workflow': Automatically build out a GitHub Action using the 'draft generate-workflow' command.
    3. 'draft setup-gh': If you are using Azure, use this command to automate the GitHub OIDC setup process to ensure that you will be able to deploy your application using your GitHub Action.

    At this point, you will have all the files needed to deploy your app onto a Kubernetes cluster (we told you it was easy!).
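
    As a rough sketch, the end-to-end flow from the root of a non-containerized app repo looks something like this:

    draft create              # answer the prompts; generates Dockerfile, Helm chart, and Kubernetes manifests
    draft generate-workflow   # generates a GitHub Action workflow to build and deploy the app
    draft setup-gh            # (Azure) automates the GitHub OIDC setup used by that workflow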

    You can also use the 'draft info' command if you are looking for information on supported languages and deployment types. Let's see it in action, shall we?


    Developing to AKS with Draft

    In this Microsoft Reactor session below, we'll briefly introduce Kubernetes and the Azure Kubernetes Service (AKS) and then demo how to enable your applications for Kubernetes using the open-source tool Draft. We'll show how Draft can help you create the boilerplate code to containerize your applications and add routing and scaling behaviours.

    Conclusion

    Overall, Draft simplifies the process of building, deploying, and managing applications on Kubernetes, and can make the overall journey from code to Kubernetes significantly easier.


    Resources


    - + \ No newline at end of file diff --git a/cnny-2023/cloud-native-fundamentals/index.html b/cnny-2023/cloud-native-fundamentals/index.html index 465828cf17..b550738911 100644 --- a/cnny-2023/cloud-native-fundamentals/index.html +++ b/cnny-2023/cloud-native-fundamentals/index.html @@ -14,14 +14,14 @@ - +

    1-1. Cloud-native Fundamentals

    · 5 min read
    Cory Skimming

    Welcome to Week 1 of #CloudNativeNewYear!

    Cloud-native New Year

    You will often hear the term "cloud-native" when discussing modern application development, but even a quick online search will return a huge number of articles, tweets, and web pages with a variety of definitions. So, what does cloud-native actually mean? Also, what makes an application a cloud-native application versus a "regular" application?

    Today, we will address these questions and more as we kickstart our learning journey (and our new year!) with an introductory dive into the wonderful world of cloud-native.


    What We'll Cover

    • What is cloud-native?
    • What is a cloud-native application?
    • The benefits of cloud-native
    • The five pillars of cloud-native
    • Exercise: Take the Cloud Skills Challenge!

    1. What is cloud-native?

    The term "cloud-native" can seem pretty self-evident (yes, hello, native to the cloud?), and in a way, it is. While there are lots of definitions of cloud-native floating around, at it's core, cloud-native simply refers to a modern approach to building software that takes advantage of cloud services and environments. This includes using cloud-native technologies, such as containers, microservices, and serverless, and following best practices for deploying, scaling, and managing applications in a cloud environment.

    Official definition from the Cloud Native Computing Foundation:

    Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

    These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil. Source


    2. So, what exactly is a cloud-native application?

    Cloud-native applications are specifically designed to take advantage of the scalability, resiliency, and distributed nature of modern cloud infrastructure. But how does this differ from a "traditional" application?

    Traditional applications have generally been built, tested, and deployed as a single, monolithic unit. The monolithic nature of this type of architecture creates close dependencies between components. This complexity and interweaving only increases as an application grows, and can make it difficult to evolve (not to mention troubleshoot) and challenging to operate over time.

    In contrast, in cloud-native architectures the application components are decomposed into loosely coupled services, rather than built and deployed as one block of code. This decomposition into multiple self-contained services enables teams to manage complexity and improve the speed, agility, and scale of software delivery. Many small parts enable teams to make targeted updates, deliver new features, and fix any issues without leading to broader service disruption.


    3. The benefits of cloud-native

    Cloud-native architectures can bring many benefits to an organization, including:

    1. Scalability: easily scale up or down based on demand, allowing organizations to adjust their resource usage and costs as needed.
    2. Flexibility: deploy and run on any cloud platform, and easily move between clouds and on-premises environments.
    3. High-availability: techniques such as redundancy, self-healing, and automatic failover help ensure that cloud-native applications are designed to be highly-available and fault tolerant.
    4. Reduced costs: take advantage of the pay-as-you-go model of cloud computing, reducing the need for expensive infrastructure investments.
    5. Improved security: tap in to cloud security features, such as encryption and identity management, to improve the security of the application.
    6. Increased agility: easily add new features or services to your applications to meet changing business needs and market demand.

4. The five pillars of cloud-native

    There are five areas that are generally cited as the core building blocks of cloud-native architecture:

    1. Microservices: Breaking down monolithic applications into smaller, independent, and loosely-coupled services that can be developed, deployed, and scaled independently.
    2. Containers: Packaging software in lightweight, portable, and self-sufficient containers that can run consistently across different environments.
    3. Automation: Using automation tools and DevOps processes to manage and operate the cloud-native infrastructure and applications, including deployment, scaling, monitoring, and self-healing.
    4. Service discovery: Using service discovery mechanisms, such as APIs & service meshes, to enable services to discover and communicate with each other.
    5. Observability: Collecting and analyzing data from the infrastructure and applications to understand and optimize the performance, behavior, and health of the system.

    These can (and should!) be used in combination to deliver cloud-native solutions that are highly scalable, flexible, and available.
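To make these pillars a little more concrete, here is a minimal, hypothetical sketch (the name and image are placeholders, not part of any project discussed here) of a declarative Kubernetes Deployment manifest that combines several of them: a small service packaged as a container, described declaratively, that the platform keeps running at a desired scale.

# Hypothetical example: a containerized service, declared once,
# that Kubernetes keeps running with three replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
      - name: hello-service
        image: nginx:1.25-alpine   # stand-in image for a small, self-contained service
        ports:
        - containerPort: 80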

    WHAT'S NEXT

    Stay tuned, as we will be diving deeper into these topics in the coming weeks:

    • Jan 24: Containers 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud-native Options

    Resources


    Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!


diff --git a/cnny-2023/cnny-kickoff/index.html b/cnny-2023/cnny-kickoff/index.html

    Kicking Off 30DaysOfCloudNative!

    · 4 min read
    Cory Skimming
    Devanshi Joshi
    Steven Murawski
    Nitya Narasimhan

    Welcome to the Kick-off Post for #30DaysOfCloudNative - one of the core initiatives within #CloudNativeNewYear! Over the next four weeks, join us as we take you from fundamentals to functional usage of Cloud-native technologies, one blog post at a time! Read on to learn a little bit about this initiative and what you can expect to learn from this journey!

    What We'll Cover


    Cloud-native New Year

Welcome to Week 01 of 🥳 #CloudNativeNewYear! Today, we kick off a full month of content and activities to skill you up on all things Cloud-native on Azure through posts, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfCloudNative in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

What are 3 things you can do today to jumpstart your learning journey?


    #30DaysOfCloudNative

    #30DaysOfCloudNative is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Cloud-native On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative weekly-themed roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON CLOUD-NATIVE FUNDAMENTALS

    Here's a sneak peek at the week 1 schedule. We'll start with a broad review of cloud-native fundamentals and walkthrough the core concepts of microservices, containers and Kubernetes.

    • Jan 23: Learn Core Concepts for Cloud-native
• Jan 24: Containers 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud Native Options

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Cloud-native Fundamentals post on January 23rd!


diff --git a/cnny-2023/cnny-wrap-up/index.html b/cnny-2023/cnny-wrap-up/index.html

    4-5. Cloud Native New Year Wrap Up

    · 6 min read
    Cory Skimming
    Steven Murawski
    Paul Yu
    Josh Duffney
    Nitya Narasimhan
    Vinicius Apolinario
    Jorge Arteiro
    Devanshi Joshi

    And that's a wrap on the inaugural #CloudNativeNewYear! Thank you for joining us to kick off the new year with this learning journey into cloud-native! In this final post of the 2023 celebration of all things cloud-native, we'll do two things:

    • Look Back - with a quick retrospective of what was covered.
    • Look Ahead - with resources and suggestions for how you can continue your skilling journey!

    We appreciate your time and attention and we hope you found this curated learning valuable. Feedback and suggestions are always welcome. From our entire team, we wish you good luck with the learning journey - now go build some apps and share your knowledge! 🎉


    What We'll Cover

    • Cloud-native fundamentals
    • Kubernetes fundamentals
    • Bringing your applications to Kubernetes
    • Go further with cloud-native
    • Resources to keep the celebration going!

    Week 1: Cloud-native Fundamentals

    In Week 1, we took a tour through the fundamentals of cloud-native technologies, including a walkthrough of the core concepts of containers, microservices, and Kubernetes.

    • Jan 23 - Cloud-native Fundamentals: The answers to life and all the universe - what is cloud-native? What makes an application cloud-native? What are the benefits? (yes, we all know it's 42, but hey, gotta start somewhere!)
    • Jan 24 - Containers 101: Containers are an essential component of cloud-native development. In this intro post, we cover how containers work and why they have become so popular.
    • Jan 25 - Kubernetes 101: Kuber-what-now? Learn the basics of Kubernetes and how it enables us to deploy and manage our applications effectively and consistently.
    A QUICKSTART GUIDE TO KUBERNETES CONCEPTS

    Missed it Live? Tune in to A Quickstart Guide to Kubernetes Concepts on demand, now!

    • Jan 26 - Microservices 101: What is a microservices architecture and how can we go about designing one?
    • Jan 27 - Exploring your Cloud Native Options: Cloud-native, while catchy, can be a very broad term. What technologies should you use? Learn some basic guidelines for when it is optimal to use different technologies for your project.

    Week 2: Kubernetes Fundamentals

    In Week 2, we took a deeper dive into the Fundamentals of Kubernetes. The posts and live demo from this week took us through how to build a simple application on Kubernetes, covering everything from deployment to networking and scaling. Note: for our samples and demo we have used Azure Kubernetes Service, but the principles apply to any Kubernetes!

    • Jan 30 - Pods and Deployments: how to use pods and deployments in Kubernetes.
    • Jan 31 - Services and Ingress: how to use services and ingress and a walk through the steps of making our containers accessible internally and externally!
• Feb 1 - ConfigMaps and Secrets: how to pass configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.
    • Feb 2 - Volumes, Mounts, and Claims: how to use persistent storage on Kubernetes (and ensure your data can survive container restarts!).
    • Feb 3 - Scaling Pods and Nodes: how to scale pods and nodes in our Kubernetes cluster.
    ASK THE EXPERTS: AZURE KUBERNETES SERVICE

    Missed it Live? Tune in to Ask the Expert with Azure Kubernetes Service on demand, now!


    Week 3: Bringing your applications to Kubernetes

    So, you have learned how to build an application on Kubernetes. What about your existing applications? In Week 3, we explored how to take an existing application and set it up to run in Kubernetes:

    • Feb 6 - CI/CD: learn how to get an existing application running in Kubernetes with a full pipeline in GitHub Actions.
    • Feb 7 - Adapting Storage, Secrets, and Configuration: how to evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes.
    • Feb 8 - Opening your Application with Ingress: how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.
    • Feb 9 - Debugging and Instrumentation: how to debug and instrument your application now that it is on Kubernetes.
• Feb 10 - CI/CD Secure Supply Chain: now that we have set up our application on Kubernetes, let's talk about container image signing and how to set up a secure supply chain.

    Week 4: Go Further with Cloud-Native

    This week we have gone further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner.

And today, February 17th, we close it out with this one post to rule (er, collect) them all!


    Keep the Learning Going!

    Learning is great, so why stop here? We have a host of great resources and samples for you to continue your cloud-native journey with Azure below:


diff --git a/cnny-2023/containers-101/index.html b/cnny-2023/containers-101/index.html

    1-2. Containers 101

    · 4 min read
    Steven Murawski
    Paul Yu
    Josh Duffney

    Welcome to Day 2 of Week 1 of #CloudNativeNewYear!

    Today, we'll focus on building an understanding of containers.

    What We'll Cover

    • Introduction
    • How do Containers Work?
    • Why are Containers Becoming so Popular?
    • Conclusion
    • Resources
    • Learning Path

    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    In the beginning, we deployed our applications onto physical servers. We only had a certain number of those servers, so often they hosted multiple applications. This led to some problems when those applications shared dependencies. Upgrading one application could break another application on the same server.

    Enter virtualization. Virtualization allowed us to run our applications in an isolated operating system instance. This removed much of the risk of updating shared dependencies. However, it increased our overhead since we had to run a full operating system for each application environment.

To address that overhead, containerization emerged as a way to improve isolation without duplicating kernel-level resources. Containers provide efficient and consistent deployment and runtime experiences for our applications and have become very popular as a way of packaging and distributing applications.

    How do Containers Work?

Containers build on two capabilities in the Linux operating system: namespaces and cgroups. These constructs allow the operating system to provide isolation to a process or group of processes, keeping their access to filesystem resources separate and providing controls on resource utilization. This, combined with tooling to help package, deploy, and run container images, has led to their popularity in today's operating environment. It gives us isolation without the overhead of additional operating system resources.

When a container host is deployed on an operating system, it schedules access to the operating system's components. It does this by providing a logically isolated group, called a namespace, that contains the processes for a given application. The container host then manages and schedules access from the namespace to the host OS, and uses cgroups to allocate compute resources. Together, with the help of cgroups and namespaces, the container host can schedule multiple applications to access host OS resources.

    Overall, this gives the illusion of virtualizing the host OS, where each application gets its own OS. In actuality, all the applications are running on the same operating system and sharing the same kernel as the container host.
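As a rough illustration of namespaces and cgroups in action, here is a hedged sketch using Docker (assuming Docker is installed locally; the image and the limits are arbitrary and only for demonstration).

# Start a container in its own namespaces, with cgroup limits of half a CPU
# and 256 MiB of memory (values are arbitrary, for illustration only).
docker run --rm -d --cpus="0.5" --memory="256m" nginx:alpine

# The host kernel enforces those limits while the container's processes
# see an isolated view of the filesystem, network, and process table.
docker stats --no-stream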

    Containers are popular in the software development industry because they provide several benefits over traditional virtualization methods. Some of these benefits include:

    • Portability: Containers make it easy to move an application from one environment to another without having to worry about compatibility issues or missing dependencies.
• Isolation: Containers provide a level of isolation between the application and the host system, which means that the application running in the container cannot directly access the host system's resources.
    • Scalability: Containers make it easy to scale an application up or down as needed, which is useful for applications that experience a lot of traffic or need to handle a lot of data.
    • Resource Efficiency: Containers are more resource-efficient than traditional virtualization methods because they don't require a full operating system to be running on each virtual machine.
    • Cost-Effective: Containers are more cost-effective than traditional virtualization methods because they don't require expensive hardware or licensing fees.

    Conclusion

    Containers are a powerful technology that allows developers to package and deploy applications in a portable and isolated environment. This technology is becoming increasingly popular in the world of software development and is being used by many companies and organizations to improve their application deployment and management processes. With the benefits of portability, isolation, scalability, resource efficiency, and cost-effectiveness, containers are definitely worth considering for your next application development project.

    Containerizing applications is a key step in modernizing them, and there are many other patterns that can be adopted to achieve cloud-native architectures, including using serverless platforms, Kubernetes, and implementing DevOps practices.

    Resources

    Learning Path

diff --git a/cnny-2023/explore-options/index.html b/cnny-2023/explore-options/index.html

    1-5. Exploring Cloud-Native Options

    · 6 min read
    Cory Skimming

    We are excited to be wrapping up our first week of #CloudNativeNewYear! This week, we have tried to set the stage by covering the fundamentals of cloud-native practices and technologies, including primers on containerization, microservices, and Kubernetes.

Don't forget to sign up for the Cloud Skills Challenge!

    Today, we will do a brief recap of some of these technologies and provide some basic guidelines for when it is optimal to use each.


    What We'll Cover

    • To Containerize or not to Containerize?
    • The power of Kubernetes
    • Where does Serverless fit?
    • Resources
    • What's coming next!


    Just joining us now? Check out these other Week 1 posts:

    To Containerize or not to Containerize?

    As mentioned in our Containers 101 post earlier this week, containers can provide several benefits over traditional virtualization methods, which has made them popular within the software development community. Containers provide a consistent and predictable runtime environment, which can help reduce the risk of compatibility issues and simplify the deployment process. Additionally, containers can improve resource efficiency by allowing multiple applications to run on the same host while isolating their dependencies.

    Some types of apps that are a particularly good fit for containerization include:

    1. Microservices: Containers are particularly well-suited for microservices-based applications, as they can be used to isolate and deploy individual components of the system. This allows for more flexibility and scalability in the deployment process.
    2. Stateless applications: Applications that do not maintain state across multiple sessions, such as web applications, are well-suited for containers. Containers can be easily scaled up or down as needed and replaced with new instances, without losing data.
    3. Portable applications: Applications that need to be deployed in different environments, such as on-premises, in the cloud, or on edge devices, can benefit from containerization. The consistent and portable runtime environment of containers can make it easier to move the application between different environments.
    4. Legacy applications: Applications that are built using older technologies or that have compatibility issues can be containerized to run in an isolated environment, without impacting other applications or the host system.
    5. Dev and testing environments: Containerization can be used to create isolated development and testing environments, which can be easily created and destroyed as needed.

While there are many types of applications that can benefit from a containerized approach, it's worth noting that containerization is not always the best option, and it's important to weigh the benefits and trade-offs before deciding to containerize an application. Additionally, some types of applications may not be a good fit for containers, including:

    • Apps that require full access to host resources: Containers are isolated from the host system, so if an application needs direct access to hardware resources such as GPUs or specialized devices, it might not work well in a containerized environment.
    • Apps that require low-level system access: If an application requires deep access to the underlying operating system, it may not be suitable for running in a container.
    • Applications that have specific OS dependencies: Apps that have specific dependencies on a certain version of an operating system or libraries may not be able to run in a container.
    • Stateful applications: Apps that maintain state across multiple sessions, such as databases, may not be well suited for containers. Containers are ephemeral by design, so the data stored inside a container may not persist between restarts.

The good news is that some of these limitations can be overcome with the use of container orchestration technologies such as Kubernetes, and by carefully designing the architecture of the application.


    The power of Kubernetes

    Speaking of Kubernetes...

    Kubernetes is a powerful tool for managing and deploying containerized applications in production environments, particularly for applications that need to scale, handle large numbers of requests, or run in multi-cloud or hybrid environments.

Kubernetes works for a wide variety of applications, but it is particularly well-suited for the following types:

    1. Microservices-based applications: Kubernetes provides a powerful set of tools for managing and deploying microservices-based applications, making it easy to scale, update, and manage the individual components of the application.
    2. Stateful applications: Kubernetes provides support for stateful applications through the use of Persistent Volumes and StatefulSets, allowing for applications that need to maintain state across multiple instances.
    3. Large-scale, highly-available systems: Kubernetes provides built-in support for scaling, self-healing, and rolling updates, making it an ideal choice for large-scale, highly-available systems that need to handle large numbers of users and requests.
    4. Multi-cloud and hybrid environments: Kubernetes can be used to deploy and manage applications across multiple cloud providers and on-premises environments, making it a good choice for organizations that want to take advantage of the benefits of multiple cloud providers or that need to deploy applications in a hybrid environment.
    New to Kubernetes?

    Where does Serverless fit in?

    Serverless is a cloud computing model where the cloud provider (like Azure) is responsible for executing a piece of code by dynamically allocating the resources. With serverless, you only pay for the exact amount of compute time that you use, rather than paying for a fixed amount of resources. This can lead to significant cost savings, particularly for applications with variable or unpredictable workloads.

    Serverless is commonly used for building applications like web or mobile apps, IoT, data processing, and real-time streaming - apps where the workloads are variable and high scalability is required. It's important to note that serverless is not a replacement for all types of workloads - it's best suited for stateless, short-lived and small-scale workloads.

    For a detailed look into the world of Serverless and lots of great learning content, revisit #30DaysofServerless.


    Resources


    What's up next in #CloudNativeNewYear?

    Week 1 has been all about the fundamentals of cloud-native. Next week, the team will be diving in to application deployment with Azure Kubernetes Service. Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!


diff --git a/cnny-2023/fundamentals-day-1/index.html b/cnny-2023/fundamentals-day-1/index.html

    2-1. Kubernetes Fundamentals - Pods and Deployments

    · 14 min read
    Steven Murawski

    Welcome to Day #1 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Last week we talked about Cloud Native architectures and the Cloud Native landscape. Today we'll explore the topic of Pods and Deployments in Kubernetes.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Setting Up A Kubernetes Environment in Azure
    • Running Containers in Kubernetes Pods
    • Making the Pods Resilient with Deployments
    • Exercise
    • Resources

    Setting Up A Kubernetes Environment in Azure

    For this week, we'll be working with a simple app - the Azure Voting App. My teammate Paul Yu ported the app to Rust and we tweaked it a bit to let us highlight some of the basic features of Kubernetes.

    You should be able to replicate this in just about any Kubernetes environment, but we'll use Azure Kubernetes Service (AKS) as our working environment for this week.

    To make it easier to get started, there's a Bicep template to deploy an AKS cluster, an Azure Container Registry (ACR) (to host our container image), and connect the two so that we can easily deploy our application.

    Step 0 - Prerequisites

    There are a few things you'll need if you want to work through this and the following examples this week.

    Required:

    • Git (and probably a GitHub account if you want to persist your work outside of your computer)
    • Azure CLI
    • An Azure subscription (if you want to follow along with the Azure steps)
    • Kubectl (the command line tool for managing Kubernetes)

    Helpful:

    • Visual Studio Code (or equivalent editor)
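Before moving on, a quick sanity check can confirm the required tooling is installed and on your PATH (a small sketch; the exact version output will vary by environment).

# Verify the prerequisite tools are available
git --version
az version
kubectl version --client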

    Step 1 - Clone the application repository

    First, I forked the source repository to my account.

    $GitHubOrg = 'smurawski' # Replace this with your GitHub account name or org name
    git clone "https://github.com/$GitHubOrg/azure-voting-app-rust"
    cd azure-voting-app-rust

    Leave your shell opened with your current location inside the application repository.

    Step 2 - Set up AKS

    Running the template deployment from the demo script (I'm using the PowerShell example in cnny23-week2-day1.ps1, but there's a Bash variant at cnny23-week2-day1.sh) stands up the environment. The second, third, and fourth commands take some of the output from the Bicep deployment to set up for later commands, so don't close out your shell after you run these commands.

    az deployment sub create --template-file ./deploy/main.bicep --location eastus --parameters 'resourceGroup=cnny-week2'
    $AcrName = az deployment sub show --name main --query 'properties.outputs.acr_name.value' -o tsv
    $AksName = az deployment sub show --name main --query 'properties.outputs.aks_name.value' -o tsv
    $ResourceGroup = az deployment sub show --name main --query 'properties.outputs.resource_group_name.value' -o tsv

    az aks get-credentials --resource-group $ResourceGroup --name $AksName

    Step 3 - Build our application container

    Since we have an Azure Container Registry set up, I'll use ACR Build Tasks to build and store my container image.

    az acr build --registry $AcrName --% --image cnny2023/azure-voting-app-rust:{{.Run.ID}} .
    $BuildTag = az acr repository show-tags `
    --name $AcrName `
    --repository cnny2023/azure-voting-app-rust `
    --orderby time_desc `
    --query '[0]' -o tsv
    tip

    Wondering what the --% is in the first command line? That tells the PowerShell interpreter to pass the input after it "as is" to the command without parsing/evaluating it. Otherwise, PowerShell messes a bit with the templated {{.Run.ID}} bit.

    Running Containers in Kubernetes Pods

    Now that we have our AKS cluster and application image ready to go, let's look into how Kubernetes runs containers.

If you've been in tech for any length of time, you've seen that every framework, runtime, orchestrator, etc., can have its own naming scheme for its concepts. So let's get into some of what Kubernetes calls things.

    The Pod

A container running in Kubernetes is called a Pod. A Pod is basically a running container on a Node or VM. It can be more than that (for example, a Pod can run multiple containers and carry some more advanced configuration), but we'll keep it simple for now and add the complexity when you need it.

    Our Pod definition can be created via the kubectl command imperatively from arguments or declaratively from a configuration file. We'll do a little of both. We'll use the kubectl command to help us write our configuration files. Kubernetes configuration files are YAML, so having an editor that supports and can help you syntax check YAML is really helpful.

    Creating a Pod Definition

    Let's create a few Pod definitions. Our application requires two containers to get working - the application and a database.

    Let's create the database Pod first. And before you comment, the configuration isn't secure nor best practice. We'll fix that later this week. For now, let's focus on getting up and running.

    This is a trick I learned from one of my teammates - Paul. By using the --output yaml and --dry-run=client options, we can have the command help us write our YAML. And with a bit of output redirection, we can stash it safely in a file for later use.

    kubectl run azure-voting-db `
    --image "postgres:15.0-alpine" `
    --env "POSTGRES_PASSWORD=mypassword" `
    --output yaml `
    --dry-run=client > manifests/pod-db.yaml

    This creates a file that looks like:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: azure-voting-db
  name: azure-voting-db
spec:
  containers:
  - env:
    - name: POSTGRES_PASSWORD
      value: mypassword
    image: postgres:15.0-alpine
    name: azure-voting-db
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

    The file, when supplied to the Kubernetes API, will identify what kind of resource to create, the API version to use, and the details of the container (as well as an environment variable to be supplied).

    We'll get that container image started with the kubectl command. Because the details of what to create are in the file, we don't need to specify much else to the kubectl command but the path to the file.

    kubectl apply -f ./manifests/pod-db.yaml

I'm going to need the IP address of the Pod so that my application can connect to it, so we can use kubectl to get some information about our pod. By default, kubectl get pod only displays certain information, but it retrieves a lot more. We can use the JSONPath syntax to index into the response and get the information we want.

    tip

To see what you can get, I usually run the kubectl command with the JSON output type (-o json), find where the data I want is, and then create my JSONPath query to get it.

    $DB_IP = kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    Now, let's create our Pod definition for our application. We'll use the same technique as before.

    kubectl run azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --env "DATABASE_SERVER=$DB_IP" `
--env "DATABASE_PASSWORD=mypassword" `
    --output yaml `
    --dry-run=client > manifests/pod-app.yaml

    That command gets us a similar YAML file to the database container - you can see the full file here

    Let's get our application container running.

    kubectl apply -f ./manifests/pod-app.yaml

    Now that the Application is Running

    We can check the status of our Pods with:

    kubectl get pods

    And we should see something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app 1/1 Running 0 36s
    azure-voting-db 1/1 Running 0 84s

    Once our pod is running, we can check to make sure everything is working by letting kubectl proxy network connections to our Pod running the application. If we get the voting web page, we'll know the application found the database and we can start voting!

    kubectl port-forward pod/azure-voting-app 8080:8080

[Screenshot: the Azure voting website in a browser, with three buttons (Dogs, Cats, and Reset) and counters showing Dogs - 0 and Cats - 0.]

    When you are done voting, you can stop the port forwarding by using Control-C to break the command.

    Clean Up

    Let's clean up after ourselves and see if we can't get Kubernetes to help us keep our application running. We can use the same configuration files to ensure that Kubernetes only removes what we want removed.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    Summary - Pods

    A Pod is the most basic unit of work inside Kubernetes. Once the Pod is deleted, it's gone. That leads us to our next topic (and final topic for today.)

    Making the Pods Resilient with Deployments

    We've seen how easy it is to deploy a Pod and get our containers running on Nodes in our Kubernetes cluster. But there's a problem with that. Let's illustrate it.

    Breaking Stuff

    Setting Back Up

    First, let's redeploy our application environment. We'll start with our application container.

    kubectl apply -f ./manifests/pod-db.yaml
    kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    The second command will report out the new IP Address for our database container. Let's open ./manifests/pod-app.yaml and update the container IP to our new one.

- name: DATABASE_SERVER
  value: YOUR_NEW_IP_HERE

    Then we can deploy the application with the information it needs to find its database. We'll also list out our pods to see what is running.

    kubectl apply -f ./manifests/pod-app.yaml
    kubectl get pods

    Feel free to look back and use the port forwarding trick to make sure your app is running if you'd like.

    Knocking It Down

    The first thing we'll try to break is our application pod. Let's delete it.

    kubectl delete pod azure-voting-app

    Then, we'll check our pod's status:

    kubectl get pods

    Which should show something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db 1/1 Running 0 50s

    We should be able to recreate our application pod deployment with no problem, since it has the current database IP address and nothing else depends on it.

    kubectl apply -f ./manifests/pod-app.yaml

    Again, feel free to do some fun port forwarding and check your site is running.

    Uncomfortable Truths

    Here's where it gets a bit stickier, what if we delete the database container?

    If we delete our database container and recreate it, it'll likely have a new IP address, which would force us to update our application configuration. We'll look at some solutions for these problems in the next three posts this week.

    Because our database problem is a bit tricky, we'll primarily focus on making our application layer more resilient and prepare our database layer for those other techniques over the next few days.

    Let's clean back up and look into making things more resilient.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    The Deployment

One of the reasons you may want to use Kubernetes is its ability to orchestrate workloads. Part of that orchestration includes being able to ensure that certain workloads are running (regardless of what Node they might be on).

    We saw that we could delete our application pod and then restart it from the manifest with little problem. It just meant that we had to run a command to restart it. We can use the Deployment in Kubernetes to tell the orchestrator to ensure we have our application pod running.

The Deployment can also encompass a lot of extra configuration: controlling how many containers of a particular type should be running, how upgrades of container images should proceed, and more.
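For example (a small sketch you can try once the Deployments below have been created), scaling the application out and watching the rollout each take a single command.

# Ask the Deployment to keep three application pods running
kubectl scale deployment azure-voting-app --replicas=3

# Watch the rollout until all replicas are ready, then list the pods
kubectl rollout status deployment/azure-voting-app
kubectl get pods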

    Creating the Deployment

    First, we'll create a Deployment for our database. We'll use a technique similar to what we did for the Pod, with just a bit of difference.

    kubectl create deployment azure-voting-db `
    --image "postgres:15.0-alpine" `
    --port 5432 `
    --output yaml `
    --dry-run=client > manifests/deployment-db.yaml

    Unlike our Pod definition creation, we can't pass in environment variable configuration from the command line. We'll have to edit the YAML file to add that.

    So, let's open ./manifests/deployment-db.yaml in our editor and add the following in the spec/containers configuration.

        env:
        - name: POSTGRES_PASSWORD
          value: "mypassword"

    Your file should look like this deployment-db.yaml.

    Once we have our configuration file updated, we can deploy our database container image.

    kubectl apply -f ./manifests/deployment-db.yaml

    For our application, we'll use the same technique.

    kubectl create deployment azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --port 8080 `
    --output yaml `
    --dry-run=client > manifests/deployment-app.yaml

    Next, we'll need to add an environment variable to the generated configuration. We'll also need the new IP address for the database deployment.

Previously, we named the pod and were able to ask for its IP address with kubectl and a bit of JSONPath. Now, the deployment created the pod for us, so there's a bit of randomness in the naming. Check out:

    kubectl get pods

    Should return something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 7s

    We can either ask for the IP with the new pod name, or we can use a selector to find our desired pod.

    kubectl get pod --selector app=azure-voting-db -o jsonpath='{.items[0].status.podIP}'

    Now, we can update our application deployment configuration file with:

        env:
        - name: DATABASE_SERVER
          value: YOUR_NEW_IP_HERE
        - name: DATABASE_PASSWORD
          value: mypassword

    Your file should look like this deployment-app.yaml (but with IPs and image names matching your environment).

    After we save those changes, we can deploy our application.

    kubectl apply -f ./manifests/deployment-app.yaml

    Let's test the resilience of our app now. First, we'll delete the pod running our application, then we'll check to make sure Kubernetes restarted our application pod.

kubectl get pods

azure-voting-app-rust ❯  kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
azure-voting-app-56c9ccc89d-skv7x   1/1     Running   0          71s
azure-voting-db-686d758fbf-8jnq8    1/1     Running   0          12m

kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
kubectl get pods

azure-voting-app-rust ❯  kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
>> kubectl get pods
pod "azure-voting-app-56c9ccc89d-skv7x" deleted
NAME                                READY   STATUS    RESTARTS   AGE
azure-voting-app-56c9ccc89d-2b5mx   1/1     Running   0          2s
azure-voting-db-686d758fbf-8jnq8    1/1     Running   0          14m
    info

    Your Pods will likely have different identifiers at the end, so adjust your commands to match the names in your environment.

    As you can see, by the time the kubectl get pods command was run, Kubernetes had already spun up a new pod for the application container image. Thanks Kubernetes!

    Clean up

Since deleting the pods would just cause the Deployments to recreate them, we have to delete the Deployments themselves.

    kubectl delete -f ./manifests/deployment-app.yaml
    kubectl delete -f ./manifests/deployment-db.yaml

    Summary - Deployments

    Deployments allow us to create more durable configuration for the workloads we deploy into Kubernetes. As we dig deeper, we'll discover more capabilities the deployments offer. Check out the Resources below for more.

    Exercise

    If you want to try these steps, head over to the source repository, fork it, clone it locally, and give it a spin!

    You can check your manifests against the manifests in the week2/day1 branch of the source repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training

diff --git a/cnny-2023/fundamentals-day-2/index.html b/cnny-2023/fundamentals-day-2/index.html

    2-2. Kubernetes Fundamentals - Services and Ingress

    · 11 min read
    Paul Yu

    Welcome to Day 2 of Week 2 of #CloudNativeNewYear!

    The theme for this week is #Kubernetes fundamentals. Yesterday we talked about how to deploy a containerized web app workload to Azure Kubernetes Service (AKS). Today we'll explore the topic of services and ingress and walk through the steps of making our containers accessible both internally as well as over the internet so that you can share it with the world 😊

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Exposing Pods via Service
    • Exposing Services via Ingress
    • Takeaways
    • Resources

    Exposing Pods via Service

There are a few ways to expose your pod in Kubernetes. One way is to take an imperative approach and use the kubectl expose command. This is probably the quickest way to achieve your goal, but it isn't the best way. A better way to expose your pod is to take a declarative approach by creating a service manifest file and deploying it using the kubectl apply command.

    Don't worry if you are unsure of how to make this manifest, we'll use kubectl to help generate it.

    First, let's ensure we have the database deployed on our AKS cluster.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests/deployment-db.yaml

Next, let's deploy the application. If you are following along from yesterday's content, there isn't anything you need to change; however, if you are deploying the app from scratch, you'll need to modify the deployment-app.yaml manifest and update it with your image tag and database pod's IP address.

    kubectl apply -f ./manifests/deployment-app.yaml

Now, let's expose the database using a service so that we can leverage Kubernetes' built-in service discovery and reference it by name rather than by pod IP. Run the following command.

    kubectl expose deployment azure-voting-db \
    --port=5432 \
    --target-port=5432

With the database exposed using a service, we can update the app deployment manifest to use the service name instead of the pod IP. This way, if the pod ever gets assigned a new IP, we don't have to worry about updating the IP each time and redeploying our web application. Kubernetes has an internal service discovery mechanism in place that allows us to reference a service by its name.

    Let's make an update to the manifest. Replace the environment variable for DATABASE_SERVER with the following:

- name: DATABASE_SERVER
  value: azure-voting-db

    Re-deploy the app with the updated configuration.

    kubectl apply -f ./manifests/deployment-app.yaml

    One service down, one to go. Run the following command to expose the web application.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080

Notice the --type argument has a value of LoadBalancer. This service type is implemented by the cloud-controller-manager, which is part of the Kubernetes control plane. When using a managed Kubernetes cluster such as Azure Kubernetes Service, a public standard load balancer will be provisioned when the service type is set to LoadBalancer. The load balancer will also have a public IP assigned, which will make your deployment publicly available.

    Kubernetes supports four service types:

    • ClusterIP: this is the default and limits service access to internal traffic within the cluster
    • NodePort: this assigns a port mapping on the node's IP address and allows traffic from the virtual network (outside the cluster)
    • LoadBalancer: as mentioned above, this creates a cloud-based load balancer
    • ExternalName: this is used in special case scenarios where you want to map a service to an external DNS name

    📝 NOTE: When exposing a web application to the internet, allowing external users to connect to your Service directly is not the best approach. Instead, you should use an Ingress, which we'll cover in the next section.

    Now, let's confirm you can reach the web app from the internet. You can use the following command to print the URL to your terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Great! The kubectl expose command gets the job done, but as mentioned above, it is not the best method of exposing deployments. It is better to expose deployments declaratively using a service manifest, so let's delete the services and redeploy using manifests.

    kubectl delete service azure-voting-db azure-voting-app

    To use kubectl to generate our manifest file, we can use the same kubectl expose command that we ran earlier but this time, we'll include --output=yaml and --dry-run=client. This will instruct the command to output the manifest that would be sent to the kube-api server in YAML format to the terminal.

    Generate the manifest for the database service.

    kubectl expose deployment azure-voting-db \
    --type=ClusterIP \
    --port=5432 \
    --target-port=5432 \
    --output=yaml \
    --dry-run=client > ./manifests/service-db.yaml

    Generate the manifest for the application service.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080 \
    --output=yaml \
    --dry-run=client > ./manifests/service-app.yaml

    The command above redirected the YAML output to your manifests directory. Here is what the web application service looks like.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: azure-voting-app
  type: LoadBalancer
status:
  loadBalancer: {}

    💡 TIP: To view the schema of any api-resource in Kubernetes, you can use the kubectl explain command. In this case the kubectl explain service command will tell us exactly what each of these fields do.
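For instance, you can drill into nested fields of the Service schema with the same command (a quick sketch; the built-in documentation it prints may vary slightly by cluster version).

# Show the documented fields of the Service spec, then one specific field
kubectl explain service.spec
kubectl explain service.spec.type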

    Re-deploy the services using the new service manifests.

    kubectl apply -f ./manifests/service-db.yaml -f ./manifests/service-app.yaml

    # You should see TYPE is set to LoadBalancer and the EXTERNAL-IP is set
    kubectl get service azure-voting-db azure-voting-app

Confirm that our application is accessible again. Run the following command to print the URL to the terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    That was easy, right? We just exposed both of our pods using Kubernetes services. The database only needs to be accessible from within the cluster so ClusterIP is perfect for that. For the web application, we specified the type to be LoadBalancer so that we can access the application over the public internet.

    But wait... remember that if you want to expose web applications over the public internet, a Service with a public IP is not the best way; the better approach is to use an Ingress resource.

    Exposing Services via Ingress

    If you read through the Kubernetes documentation on Ingress you will see a diagram that depicts the Ingress sitting in front of the Service resource with a routing rule between it. In order to use Ingress, you need to deploy an Ingress Controller and it can be configured with many routing rules to forward traffic to one or many backend services. So effectively, an Ingress is a load balancer for your Services.

    With that said, we no longer need a service type of LoadBalancer since the service does not need to be accessible from the internet. It only needs to be accessible from the Ingress Controller (internal to the cluster) so we can change the service type to ClusterIP.

Update your service-app.yaml file to look like this:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: azure-voting-app

    📝 NOTE: The default service type is ClusterIP so we can omit the type altogether.

    Re-apply the app service manifest.

    kubectl apply -f ./manifests/service-app.yaml

    # You should see TYPE set to ClusterIP and EXTERNAL-IP set to <none>
    kubectl get service azure-voting-app

    Next, we need to install an Ingress Controller. There are quite a few options, and the Kubernetes-maintained NGINX Ingress Controller is commonly deployed.

    You could install this manually by following these instructions, but if you do that you'll be responsible for maintaining and supporting the resource.

    I like to take advantage of free maintenance and support when I can get it, so I'll opt to use the Web Application Routing add-on for AKS.

    💡 TIP: Whenever you install an AKS add-on, it will be maintained and fully supported by Azure Support.

    Enable the web application routing add-on in our AKS cluster with the following command.

    az aks addon enable \
    --name <YOUR_AKS_NAME> \
--resource-group <YOUR_AKS_RESOURCE_GROUP> \
    --addon web_application_routing

    ⚠️ WARNING: This command can take a few minutes to complete

    Now, let's use the same approach we took in creating our service to create our Ingress resource. Run the following command to generate the Ingress manifest.

    kubectl create ingress azure-voting-app \
    --class=webapprouting.kubernetes.azure.com \
    --rule="/*=azure-voting-app:80" \
    --output yaml \
    --dry-run=client > ./manifests/ingress.yaml

The --class=webapprouting.kubernetes.azure.com option tells Kubernetes to use the ingress class provided by the AKS web application routing add-on. This AKS add-on can also integrate with other Azure services, such as Azure DNS and Azure Key Vault for TLS certificate management, and this special class makes it all work.

    The --rule="/*=azure-voting-app:80" option looks confusing but we can use kubectl again to help us understand how to format the value for the option.

    kubectl create ingress --help

    In the output you will see the following:

    --rule=[]:
    Rule in format host/path=service:port[,tls=secretname]. Paths containing the leading character '*' are
    considered pathType=Prefix. tls argument is optional.

    It expects a host and path separated by a forward-slash, then expects the backend service name and port separated by a colon. We're not using a hostname for this demo so we can omit it. For the path, an asterisk is used to specify a wildcard path prefix.

    So, the value of /*=azure-voting-app:80 creates a routing rule for all paths following the domain (or in our case since we don't have a hostname specified, the IP) to route traffic to our azure-voting-app backend service on port 80.

    📝 NOTE: Configuring the hostname and TLS is outside the scope of this demo but please visit this URL https://bit.ly/aks-webapp-routing for an in-depth hands-on lab centered around Web Application Routing on AKS.

    Your ingress.yaml file should look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: azure-voting-app
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - http:
      paths:
      - backend:
          service:
            name: azure-voting-app
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}

    Apply the app ingress manifest.

    kubectl apply -f ./manifests/ingress.yaml

    Validate the web application is available from the internet again. You can run the following command to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Takeaways

Exposing your applications both internally and externally can be easily achieved using Service and Ingress resources, respectively. If your service is HTTP or HTTPS based and needs to be accessible from outside the cluster, use an Ingress with an internal Service (i.e., ClusterIP or NodePort); otherwise, use the Service resource. If your TCP-based Service needs to be publicly accessible, you can set the type to LoadBalancer to expose a public IP for it. To learn more about these resources, please visit the links listed below.

    Lastly, if you are unsure how to begin writing your service manifest, you can use kubectl and have it do most of the work for you 🥳

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

diff --git a/cnny-2023/fundamentals-day-3/index.html b/cnny-2023/fundamentals-day-3/index.html

    2-3. Kubernetes Fundamentals - ConfigMaps and Secrets

    · 6 min read
    Josh Duffney

    Welcome to Day 3 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about Services and Ingress. Today we'll explore the topic of passing configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

• Decouple configurations with ConfigMaps and Secrets
    • Passing Environment Data with ConfigMaps and Secrets
    • Conclusion

Decouple configurations with ConfigMaps and Secrets

A ConfigMap is a Kubernetes object that decouples configuration data from pod definitions. Kubernetes Secrets are similar, but are designed to decouple sensitive information.

Separating the configuration and secrets from your application promotes better organization and security of your Kubernetes environment. It also enables you to share the same configuration and different secrets across multiple pods and deployments, which can simplify scaling and management. Using ConfigMaps and Secrets in Kubernetes is a best practice that can help to improve the scalability, security, and maintainability of your cluster.

    By the end of this tutorial, you'll have added a Kubernetes ConfigMap and Secret to the Azure Voting deployment.

    Passing Environment Data with ConfigMaps and Secrets

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Create the ConfigMap

ConfigMaps can be used in one of two ways: as environment variables or as volumes.
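This tutorial uses the environment-variable approach, but for reference, here is a minimal sketch of the volume approach, in which each ConfigMap key appears as a file inside the container (the mount path and image placeholder here are hypothetical, not part of the sample app's manifests).

# Sketch only: consuming a ConfigMap as a volume inside a pod template
spec:
  containers:
  - name: azure-voting-app
    image: <YOUR_IMAGE>
    volumeMounts:
    - name: app-config
      mountPath: /etc/config   # hypothetical mount path; each key becomes a file here
      readOnly: true
  volumes:
  - name: app-config
    configMap:
      name: azure-voting-config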

For this tutorial you'll use a ConfigMap to create three environment variables inside the pod: DATABASE_SERVER, FIRST_VALUE, and SECOND_VALUE. DATABASE_SERVER provides part of the connection string to the Postgres database. FIRST_VALUE and SECOND_VALUE are configuration options that change which voting options the application presents to the users.

    Follow the below steps to create a new ConfigMap:

    1. Create a YAML file named 'config-map.yaml'. In this file, specify the environment variables for the application.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: azure-voting-config
  data:
    DATABASE_SERVER: azure-voting-db
    FIRST_VALUE: "Go"
    SECOND_VALUE: "Rust"
    2. Create the config map in your Kubernetes cluster by running the following command:

      kubectl create -f config-map.yaml

    Create the Secret

    The deployment-db.yaml and deployment-app.yaml are Kubernetes manifests that deploy the Azure Voting App. Currently, those deployment manifests contain the environment variables POSTGRES_PASSWORD and DATABASE_PASSWORD with the value stored as plain text. Your task is to replace that environment variable with a Kubernetes Secret.

    Create a Secret running the following commands:

    1. Encode mypassword.

      echo -n "mypassword" | base64
2. Create a YAML file named secret.yaml. In this file, add POSTGRES_PASSWORD as the key and the encoded value returned above as the value in the data section.

  apiVersion: v1
  kind: Secret
  metadata:
    name: azure-voting-secret
  type: Opaque
  data:
    POSTGRES_PASSWORD: bXlwYXNzd29yZA==
    3. Create the Secret in your Kubernetes cluster by running the following command:

      kubectl create -f secret.yaml

    [!WARNING] base64 encoding is a simple and widely supported way to obscure plaintext data, but it is not secure, as it can easily be decoded. If you want to store sensitive data such as passwords, you should use a more secure method, like encrypting with a Key Management Service (KMS), before storing it in the Secret.
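
    If you prefer not to hand-encode values or commit them to a YAML file, you can also generate the same Secret directly from a literal value. A quick sketch (the inline password here is an example only):

    ```bash
    # Generate the Secret manifest without manually base64-encoding the value
    kubectl create secret generic azure-voting-secret \
      --from-literal=POSTGRES_PASSWORD=mypassword \
      --dry-run=client -o yaml > secret.yaml

    kubectl apply -f secret.yaml
    ```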

    Modify the app deployment manifest

    With the ConfigMap and Secret both created, the next step is to replace the environment variables in the application deployment manifest with the values stored in the ConfigMap and the Secret.

    Complete the following steps to add the ConfigMap and Secret to the deployment manifest:

    1. Open the Kubernetes manifest file deployment-app.yaml.

    2. In the containers section, add an envFrom section and update the env section.

      envFrom:
      - configMapRef:
          name: azure-voting-config
      env:
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: azure-voting-secret
            key: POSTGRES_PASSWORD

      Using envFrom exposes all the values within the ConfigMap as environment variables, so you don't have to list them individually.

    3. Save the changes to the deployment manifest file.

    4. Apply the changes to the deployment by running the following command:

      kubectl apply -f deployment-app.yaml

    Modify the database deployment manifest

    Next, update the database deployment manifest and replace the plain-text environment variable with the Kubernetes Secret.

    1. Open the deployment-db.yaml.

    2. To add the secret to the deployment, replace the env section with the following code:

      env:
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            name: azure-voting-secret
            key: POSTGRES_PASSWORD
    3. Apply the updated manifest.

      kubectl apply -f deployment-db.yaml

    Verify the ConfigMap and output environment variables

    Verify that the ConfigMap was added to your deployment by running the following command:

    ```bash
    kubectl describe deployment azure-voting-app
    ```

    Browse the output until you find the envFrom section with the config map reference.

    You can also verify that the environment variables from the ConfigMap are being passed to the container by running the command kubectl exec -it <pod-name> -- printenv. This command shows all the environment variables passed to the pod, including the ones from the ConfigMap.
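
    For example, using a pod name copied from kubectl get pods, the check might look like this (the grep filter is only there for readability):

    ```bash
    # Replace <pod-name> with the name of your azure-voting-app pod
    kubectl exec -it <pod-name> -- printenv | grep -E 'DATABASE_SERVER|FIRST_VALUE|SECOND_VALUE|DATABASE_PASSWORD'
    ```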

    By following these steps, you will have successfully added a config map to the Azure Voting App Kubernetes deployment, and the environment variables defined in the config map will be passed to the container running in the pod.

    Verify the Secret and describe the deployment

    Once the secret has been created, you can verify it exists by running the following command:

    kubectl get secrets

    You can view additional information, such as labels, annotations, type, and the Data by running kubectl describe:

    kubectl describe secret azure-voting-secret

    By default, the describe command doesn't output the encoded value, but if you output the results as JSON or YAML you'll be able to see the secret's encoded value.

     kubectl get secret azure-voting-secret -o json
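
    To confirm the value round-trips correctly, you can also pull just the encoded field and decode it locally (a quick sanity check; avoid doing this on a shared terminal):

    ```bash
    kubectl get secret azure-voting-secret -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 --decode
    # note: on some systems the decode flag is -d or -D
    ```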

    Conclusion

    In conclusion, using ConfigMaps and Secrets in Kubernetes can help to improve the scalability, security, and maintainability of your cluster. By decoupling configuration data and sensitive information from pod definitions, you can promote better organization and security in your Kubernetes environment. Additionally, separating these elements allows for sharing the same configuration and different secrets across multiple pods and deployments, simplifying scaling and management.

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/fundamentals-day-4/index.html b/cnny-2023/fundamentals-day-4/index.html index 6bf4659b6b..e96398c89d 100644 --- a/cnny-2023/fundamentals-day-4/index.html +++ b/cnny-2023/fundamentals-day-4/index.html @@ -14,13 +14,13 @@ - +

    2-4. Kubernetes Fundamentals - Volumes, Mounts, and Claims

    · 8 min read
    Paul Yu

    Welcome to Day 4 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about how to set app configurations and secrets at runtime using Kubernetes ConfigMaps and Secrets. Today we'll explore the topic of persistent storage on Kubernetes and show how you can leverage Persistent Volumes and Persistent Volume Claims to ensure your PostgreSQL data can survive container restarts.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Containers are ephemeral
    • Persistent storage on Kubernetes
    • Persistent storage on AKS
    • Takeaways
    • Resources

    Containers are ephemeral

    In our sample application, the frontend UI writes vote values to a backend PostgreSQL database. By default, the database container stores its data on the container's local file system, so there will be data loss when the pod is re-deployed or crashes, as containers are meant to start with a clean slate each time.

    Let's re-deploy our sample app and experience the problem first hand.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests

    Wait for the azure-voting-app service to be assigned a public IP then browse to the website and submit some votes. Use the command below to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Now, let's delete the pods and watch Kubernetes do what it does best... that is, re-schedule pods.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl delete --all pod --wait=false && kubectl get po -w

    Once the pods have been recovered, reload the website and confirm the vote tally has been reset to zero.

    We need to fix this so that the data outlives the container.

    Persistent storage on Kubernetes

    In order for application data to survive crashes and restarts, you must implement Persistent Volumes and Persistent Volume Claims.

    A persistent volume represents storage that is available to the cluster. Storage volumes can be provisioned manually by an administrator or dynamically using the Container Storage Interface (CSI) and storage classes, which include information on how to provision CSI volumes.

    When a user needs to add persistent storage to their application, a persistent volume claim is made to allocate chunks of storage from the volume. This "claim" includes things like volume mode (e.g., file system or block storage), the amount of storage to allocate, the access mode, and optionally a storage class. Once a persistent volume claim has been deployed, users can add the volume to the pod and mount it in a container.

    In the next section, we'll demonstrate how to enable persistent storage on AKS.

    Persistent storage on AKS

    With AKS, CSI drivers and storage classes are pre-deployed into your cluster. This allows you to natively use Azure Disks, Azure Files, and Azure Blob Storage as persistent volumes. You can either bring your own Azure storage account and use it with AKS or have AKS provision an Azure storage account for you.

    To view the Storage CSI drivers that have been enabled in your AKS cluster, run the following command.

    az aks show \
    --name <YOUR_AKS_NAME> \
    --resource-group <YOUR_AKS_RESOURCE_GROUP> \
    --query storageProfile

    You should see output that looks like this.

    {
    "blobCsiDriver": null,
    "diskCsiDriver": {
    "enabled": true,
    "version": "v1"
    },
    "fileCsiDriver": {
    "enabled": true
    },
    "snapshotController": {
    "enabled": true
    }
    }

    To view the storage classes that have been installed in your cluster, run the following command.

    kubectl get storageclass

    Workload requirements will dictate which CSI driver and storage class you will need to use.

    If you need block storage, then you should use the blobCsiDriver. The driver may not be enabled by default but you can enable it by following instructions which can be found in the Resources section below.

    If you need file storage you should leverage either diskCsiDriver or fileCsiDriver. The decision between these two boils down to whether or not you need to have the underlying storage accessible by one pod or multiple pods. It is important to note that diskCsiDriver currently supports access from a single pod only. Therefore, if you need data to be accessible by multiple pods at the same time, then you should opt for fileCsiDriver.
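
    As a point of comparison, a claim that multiple pods can read and write at the same time might look like the sketch below, using the ReadWriteMany access mode with the azurefile-csi storage class (the name and size are illustrative; run kubectl get storageclass to confirm what's available in your cluster):

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-azurefile-shared      # example name
    spec:
      accessModes:
        - ReadWriteMany               # accessible from multiple pods at once
      resources:
        requests:
          storage: 5Gi
      storageClassName: azurefile-csi # pre-installed on AKS; verify the exact name
    ```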

    For our PostgreSQL deployment, we'll use the diskCsiDriver and have AKS create an Azure Disk resource for us. There is no need to create a PV resource; all we need to do is create a PVC using the managed-csi-premium storage class.

    Run the following command to create the PVC.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-azuredisk
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: managed-csi-premium
    EOF

    When you check the PVC resource, you'll notice the STATUS is set to Pending. It will be set to Bound once the volume is mounted in the PostgreSQL container.

    kubectl get persistentvolumeclaim

    Let's delete the azure-voting-db deployment.

    kubectl delete deploy azure-voting-db

    Next, we need to apply an updated deployment manifest which includes our PVC.

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      creationTimestamp: null
      labels:
        app: azure-voting-db
      name: azure-voting-db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: azure-voting-db
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: azure-voting-db
        spec:
          containers:
          - image: postgres:15.0-alpine
            name: postgres
            ports:
            - containerPort: 5432
            env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: azure-voting-secret
                  key: POSTGRES_PASSWORD
            resources: {}
            volumeMounts:
            - name: mypvc
              mountPath: "/var/lib/postgresql/data"
              subPath: "data"
          volumes:
          - name: mypvc
            persistentVolumeClaim:
              claimName: pvc-azuredisk
    EOF

    In the manifest above, you'll see that we are mounting a new volume called mypvc (the name can be whatever you want) in the pod which points to a PVC named pvc-azuredisk. With the volume in place, we can mount it in the container by referencing the name of the volume mypvc and setting the mount path to /var/lib/postgresql/data (which is the default path).

    💡 IMPORTANT: When mounting a volume into a non-empty subdirectory, you must add subPath to the volume mount and point it to a subdirectory in the volume rather than mounting at root. In our case, when Azure Disk is formatted, it leaves a lost+found directory as documented here.

    Watch the pods and wait for the STATUS to show Running and the pod's READY status to show 1/1.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl get po -w

    Verify that the STATUS of the PVC is now set to Bound.

    kubectl get persistentvolumeclaim
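
    You can also list the persistent volume that AKS dynamically provisioned to satisfy the claim; the VOLUME column in the PVC output should match the NAME column here:

    ```bash
    kubectl get persistentvolume
    ```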

    With the new database container running, let's restart the application pod, wait for the pod's READY status to show 1/1, then head back over to our web browser and submit a few votes.

    kubectl delete pod -lapp=azure-voting-app --wait=false && kubectl get po -lapp=azure-voting-app -w

    Now the moment of truth... let's rip out the pods again, wait for the pods to be re-scheduled, and confirm our vote counts remain intact.

    kubectl delete --all pod --wait=false && kubectl get po -w

    If you navigate back to the website, you'll find the votes are still there 🎉

    Takeaways

    By design, containers are meant to be ephemeral, and stateless workloads are ideal on Kubernetes. However, there will come a time when your data needs to outlive the container. To persist data in your Kubernetes workloads, you need to leverage PVs, PVCs, and optionally storage classes. In our demo scenario, we leveraged CSI drivers built into AKS and created a PVC using pre-installed storage classes. From there, we updated the database deployment to mount the PVC in the container, and AKS did the rest of the work in provisioning the underlying Azure Disk. If the built-in storage classes do not fit your needs (for example, you need to change the reclaimPolicy or the SKU of the Azure resource), you can create your own custom storage class and configure it just the way you need it 😊
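
    As a rough sketch, a custom storage class that keeps the disk after the claim is deleted and uses a different disk SKU might look like this (the class name and SKU are examples; see the Azure Disk CSI driver documentation for the full parameter list):

    ```yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-csi-standard-retain   # example name
    provisioner: disk.csi.azure.com
    parameters:
      skuName: StandardSSD_LRS            # example disk SKU
    reclaimPolicy: Retain                 # keep the Azure Disk when the PVC is deleted
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    ```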

    We'll revisit this topic again next week but in the meantime, check out some of the resources listed below to learn more.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/fundamentals-day-5/index.html b/cnny-2023/fundamentals-day-5/index.html index 6b14af64d3..14cd180184 100644 --- a/cnny-2023/fundamentals-day-5/index.html +++ b/cnny-2023/fundamentals-day-5/index.html @@ -14,13 +14,13 @@ - +

    2-5. Kubernetes Fundamentals - Scaling Pods and Nodes

    · 10 min read
    Steven Murawski

    Welcome to Day 5 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about adding persistent storage to our deployment. Today we'll explore the topic of scaling pods and nodes in our Kubernetes cluster.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Scaling Our Application
    • Scaling Pods
    • Scaling Nodes
    • Exercise
    • Resources

    Scaling Our Application

    One of our primary reasons to use a service like Kubernetes to orchestrate our workloads is the ability to scale. We've approached scaling in a multitude of ways over the years, taking advantage of the ever-evolving levels of hardware and software. Kubernetes allows us to scale our units of work, Pods, and the Nodes they run on. This allows us to take advantage of both hardware and software scaling abilities. Kubernetes can help improve the utilization of existing hardware (by scheduling Pods on Nodes that have resource capacity). And, with the capabilities of virtualization and/or cloud hosting (or a bit more work, if you have a pool of physical machines), Kubernetes can expand (or contract) the number of Nodes capable of hosting Pods. Scaling is primarily driven by resource utilization, but can be triggered by a variety of other sources thanks to projects like Kubernetes Event-driven Autoscaling (KEDA).

    Scaling Pods

    Our first level of scaling is with our Pods. Earlier, when we worked on our deployment, we talked about how Kubernetes would use the deployment configuration to ensure that we had the desired workloads running. One thing we didn't explore was running more than one instance of a pod. We can define a number of replicas of a pod in our Deployment.

    Manually Scale Pods

    So, if we wanted to define more pods right at the start (or at any point really), we could update our deployment configuration file with the number of replicas and apply that configuration file.

    spec:
      replicas: 5

    Or we could use the kubectl scale command to update the deployment with a number of pods to create.

    kubectl scale --replicas=5 deployment/azure-voting-app

    Both of these approaches modify the running configuration of our Kubernetes cluster and request that it ensure that we have that set number of replicas running. Because this was a manual change, the Kubernetes cluster won't automatically increase or decrease the number of pods. It'll just ensure that there are always the specified number of pods running.
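
    A quick way to confirm the requested replica count (and how many replicas are actually ready) is to query the deployment directly:

    ```bash
    kubectl get deployment azure-voting-app
    # or print just the desired replica count
    kubectl get deployment azure-voting-app -o jsonpath='{.spec.replicas}'
    ```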

    Autoscale Pods with the Horizontal Pod Autoscaler

    Another approach to scaling our pods is to allow the Horizontal Pod Autoscaler to help us scale in response to resources being used by the pod. This requires a bit more configuration up front. When we define our pod in our deployment, we need to include resource requests and limits. The requests help Kubernetes determine what nodes may have capacity for a new instance of a pod. The limit tells us where the node should cap utilization for a particular instance of a pod. For example, we'll update our deployment to request 0.25 CPU and set a limit of 0.5 CPU.

    spec:
      containers:
      - image: acrudavoz.azurecr.io/cnny2023/azure-voting-app-rust:ca4
        name: azure-voting-app-rust
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          value: postgres://postgres:mypassword@10.244.0.29
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m

    Now that we've given Kubernetes an allowed range and an idea of what free resources a node should have to place new pods, we can set up autoscaling. Because autoscaling is a persistent configuration, I like to define it in a configuration file that I'll be able to keep with the rest of my cluster configuration. We'll use the kubectl command to help us write the configuration file. We'll request that Kubernetes watch our pods and, when the average CPU utilization is 50% of the requested usage (in our case, if it's using more than 0.375 CPU across the current number of pods), it can grow the number of pods serving requests up to 10. If the utilization drops, Kubernetes will have permission to deprovision pods down to the minimum (three in our example).

    kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client

    Which would give us:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      creationTimestamp: null
      name: azure-voting-app
    spec:
      maxReplicas: 10
      minReplicas: 3
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: azure-voting-app
      targetCPUUtilizationPercentage: 50
    status:
      currentReplicas: 0
      desiredReplicas: 0

    So, how often does the autoscaler check the metrics being monitored? The autoscaler checks the Metrics API every 15 seconds, however the pod stats are only updated every 60 seconds. This means that an autoscale event may be evaluated about once a minute. Once a scale-down event happens, however, Kubernetes has a cooldown period to give the new pods a chance to distribute the workload and let the new metrics accumulate. There is no delay on scale-up events.
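
    To see what the autoscaler is currently observing and deciding, you can inspect the HorizontalPodAutoscaler resource itself; the describe output includes current metrics and recent scaling events:

    ```bash
    kubectl get hpa azure-voting-app
    kubectl describe hpa azure-voting-app
    ```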

    Application Architecture Considerations

    We've focused in this example on our front end, which is an easier scaling story. When we start talking about scaling our database layers or anything that deals with persistent storage or has primary/replica configuration requirements things get a bit more complicated. Some of these applications may have built-in leader election or could use sidecars to help use existing features in Kubernetes to perform that function. For shared storage scenarios, persistent volumes (or persistent volumes with Azure) can be of help, if the application knows how to play well with shared file access.

    Ultimately, you know your application architecture and, while Kubernetes may not have an exact match to how you are doing things today, the underlying capability is probably there under a different name. This abstraction allows you to more effectively use Kubernetes to operate a variety of workloads with the levels of controls you need.

    Scaling Nodes

    We've looked at how to scale our pods, but that assumes we have enough resources in our existing pool of nodes to accommodate those scaling requests. Kubernetes can also help scale our available nodes to ensure that our applications have the necessary resources to meet their performance requirements.

    Manually Scale Nodes

    Manually scaling nodes isn't a direct function of Kubernetes, so your operating environment instructions may vary. On Azure, it's pretty straightforward. Using the Azure CLI (or other tools), we can tell our AKS cluster to scale up or scale down the number of nodes in our node pool.

    First, we'll check out how many nodes we currently have in our working environment.

    kubectl get nodes

    This will show us

    azure-voting-app-rust ❯  kubectl get nodes
    NAME                            STATUS   ROLES   AGE     VERSION
    aks-pool0-37917684-vmss000000   Ready    agent   5d21h   v1.24.6

    Then, we'll scale it up to three nodes.

    az aks scale --resource-group $ResourceGroup --name $AksName --node-count 3

    Then, we'll check out how many nodes we now have in our working environment.

    kubectl get nodes

    Which returns:

    azure-voting-app-rust ❯  kubectl get nodes
    NAME                            STATUS   ROLES   AGE     VERSION
    aks-pool0-37917684-vmss000000   Ready    agent   5d21h   v1.24.6
    aks-pool0-37917684-vmss000001   Ready    agent   5m27s   v1.24.6
    aks-pool0-37917684-vmss000002   Ready    agent   5m10s   v1.24.6

    Autoscale Nodes with the Cluster Autoscaler

    Things get more interesting when we start working with the Cluster Autoscaler. The Cluster Autoscaler watches for the inability of Kubernetes to schedule the required number of pods due to resource constraints (and a few other criteria like affinity/anti-affinity). If there are insufficient resources available on the existing nodes, the autoscaler can provision new nodes into the nodepool. Likewise, the autoscaler watches to see if the existing pods could be consolidated to a smaller set of nodes and can remove excess nodes.

    Enabling the autoscaler is likewise an update that can be dependent on where and how your Kubernetes cluster is hosted. Azure makes it easy with a simple Azure CLI command.

    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 1 `
    --max-count 5

    There are a variety of settings that can be configured to tune how the autoscaler works.
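
    For example, on AKS the autoscaler behavior can be tuned through the cluster autoscaler profile in the Azure CLI; the setting below is just one illustrative knob (how often the autoscaler re-evaluates the cluster), so check the AKS cluster autoscaler documentation for the full list of options:

    ```bash
    az aks update \
      --resource-group $ResourceGroup \
      --name $AksName \
      --cluster-autoscaler-profile scan-interval=30s
    ```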

    Scaling on Different Events

    CPU and memory utilization are the primary drivers for the Horizontal Pod Autoscaler, but those might not be the best measures as to when you might want to scale workloads. There are other options for scaling triggers and one of the more common plugins to help with that is the Kubernetes Event-driven Autoscaling (KEDA) project. The KEDA project makes it easy to plug in different event sources to help drive scaling. Find more information about using KEDA on AKS here.
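
    As a taste of what that looks like, here is a minimal sketch of a KEDA ScaledObject targeting our deployment and scaling on CPU; it assumes KEDA is already installed in the cluster (for example via the AKS KEDA add-on), and the names and thresholds are illustrative:

    ```yaml
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: azure-voting-app-scaler   # example name
    spec:
      scaleTargetRef:
        name: azure-voting-app        # the deployment to scale
      minReplicaCount: 1
      maxReplicaCount: 10
      triggers:
      - type: cpu
        metricType: Utilization
        metadata:
          value: "50"                 # target average CPU utilization (%)
    ```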

    Exercise

    Let's try out the scaling configurations that we just walked through using our sample application. If you still have your environment from Day 1, you can use that.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Configure Horizontal Pod Autoscaler

    • Edit ./manifests/deployment-app.yaml to include resource requests and limits.
      resources:
        requests:
          cpu: 250m
        limits:
          cpu: 500m
    • Apply the updated deployment configuration.
      kubectl apply -f ./manifests/deployment-app.yaml
    • Create the horizontal pod autoscaler configuration and apply it.
      kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client > ./manifests/scaler-app.yaml
      kubectl apply -f ./manifests/scaler-app.yaml
    • Check to see your pods scale out to the minimum.
      kubectl get pods

    Configure Cluster Autoscaler

    Configuring the basic behavior of the Cluster Autoscaler is a bit simpler. We just need to run the Azure CLI command to enable the autoscaler and define our lower and upper limits.

    • Check the current nodes available (should be 1).
    kubectl get nodes
    • Update the cluster to enable the autoscaler
    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 2 `
    --max-count 5
    • Check to see the current number of nodes (should be 2 now).
    kubectl get nodes

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training

    - + \ No newline at end of file diff --git a/cnny-2023/index.html b/cnny-2023/index.html index 17023bff5e..ddc5b36b82 100644 --- a/cnny-2023/index.html +++ b/cnny-2023/index.html @@ -14,13 +14,13 @@ - +

    · 4 min read
    Cory Skimming
    Devanshi Joshi
    Steven Murawski
    Nitya Narasimhan

    Welcome to the Kick-off Post for #30DaysOfCloudNative - one of the core initiatives within #CloudNativeNewYear! Over the next four weeks, join us as we take you from fundamentals to functional usage of Cloud-native technologies, one blog post at a time! Read on to learn a little bit about this initiative and what you can expect to learn from this journey!

    What We'll Cover


    Cloud-native New Year

    Welcome to Week 01 of 🥳 #CloudNativeNewYear! Today, we kick off a full month of content and activities to skill you up on all things Cloud-native on Azure with content, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfCloudNative in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

    What are 3 things you can do today, to jumpstart your learning journey?


    #30DaysOfCloudNative

    #30DaysOfCloudNative is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Cloud-native On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative weekly-themed roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON CLOUD-NATIVE FUNDAMENTALS

    Here's a sneak peek at the week 1 schedule. We'll start with a broad review of cloud-native fundamentals and walk through the core concepts of microservices, containers and Kubernetes.

    • Jan 23: Learn Core Concepts for Cloud-native
    • Jan 24: Container 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud Native Options

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Cloud-native Fundamentals post on January 23rd!


    - + \ No newline at end of file diff --git a/cnny-2023/microservices-101/index.html b/cnny-2023/microservices-101/index.html index 885c8cfcad..7e4e4a60a2 100644 --- a/cnny-2023/microservices-101/index.html +++ b/cnny-2023/microservices-101/index.html @@ -14,13 +14,13 @@ - +

    1-4. Microservices 101

    · 6 min read
    Josh Duffney

    Welcome to Day 4 of Week 1 of #CloudNativeNewYear!

    This week we'll focus on the fundamentals of cloud-native development, continuing with this post on microservices: what they are, how to design them, and the challenges they introduce as you adopt them.

    What We'll Cover

    • What is Microservice Architecture?
    • How do you design a Microservice?
    • What challenges do Microservices introduce?
    • Conclusion
    • Resources


    Microservices are a modern way of designing and building software that increases deployment velocity by decomposing an application into small autonomous services that can be deployed independently.

    By deploying loosely coupled microservices your applications can be developed, deployed, and scaled independently. Because each service is independent, it can be updated or replaced without having to worry about the impact on the rest of the application. This means that if a bug is found in one service, it can be fixed without having to redeploy the entire application. All of which gives an organization the ability to deliver value to their customers faster.

    In this article, we will explore the basics of microservices architecture, its benefits and challenges, and how it can help improve the development, deployment, and maintenance of software applications.

    What is Microservice Architecture?

    Before explaining what Microservice architecture is, it’s important to understand what problems microservices aim to address.

    Traditional software development is centered around building monolithic applications. Monolithic applications are built as a single, large codebase. This means your code is tightly coupled, causing the monolithic app to suffer from the following:

    Too much Complexity: Monolithic applications can become complex and difficult to understand and maintain as they grow. This can make it hard to identify and fix bugs and add new features.

    Difficult to Scale: Monolithic applications can be difficult to scale as they often have a single point of failure, which can cause the whole application to crash if a service fails.

    Slow Deployment: Deploying a monolithic application can be risky and time-consuming, as a small change in one part of the codebase can affect the entire application.

    Microservice architecture (often called microservices) is an architecture style that addresses the challenges created by Monolithic applications. Microservices architecture is a way of designing and building software applications as a collection of small, independent services that communicate with each other through APIs. This allows for faster development and deployment cycles, as well as easier scaling and maintenance than is possible with a monolithic application.

    How do you design a Microservice?

    Building applications with Microservices architecture requires a different approach. Microservices architecture focuses on business capabilities rather than technical layers, such as data access or messaging. Doing so requires that you shift your focus away from the technical stack and model your applications based upon the various domains that exist within the business.

    Domain-driven design (DDD) is a way to design software by focusing on the business needs. You can use Domain-driven design as a framework that guides the development of well-designed microservices by building services that encapsulate knowledge in each domain and abstract that knowledge from clients.

    In Domain-driven design you start by modeling the business domain and creating a domain model. A domain model is an abstract model of the business domain that distills and organizes a domain of knowledge and provides a common language for developers and domain experts. It's the resulting domain model that microservices are best suited to be built around, because it helps establish a well-defined boundary between external systems and other internal applications.

    In short, before you begin designing microservices, start by mapping the functions of the business and their connections to create a domain model for the microservice(s) to be built around.

    What challenges do Microservices introduce?

    Microservices solve a lot of problems and have several advantages, but the grass isn’t always greener on the other side.

    One of the key challenges of microservices is managing communication between services. Because services are independent, they need to communicate with each other through APIs. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear API design, with well-defined inputs and outputs for each service. It is also important to have a system for managing and monitoring communication between services, to ensure that everything is running smoothly.

    Another challenge of microservices is managing the deployment and scaling of services. Because each service is independent, it needs to be deployed and scaled separately from the rest of the application. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear and consistent deployment process, with well-defined steps for deploying and scaling each service. Furthermore, it is advisable to host them on a system with self-healing capabilities to reduce operational burden.

    It is also important to have a system for monitoring and managing the deployment and scaling of services, to ensure optimal performance.

    Each of these challenges has created fertile ground for the tooling and processes that exist in the cloud-native ecosystem. Kubernetes, CI/CD, and other DevOps practices are part of the package of adopting the microservices architecture.

    Conclusion

    In summary, microservices architecture focuses on software applications as a collection of small, independent services that communicate with each other over well-defined APIs.

    The main advantages of microservices include:

    • increased flexibility and scalability per microservice,
    • efficient resource utilization (with help from a container orchestrator like Kubernetes),
    • and faster development cycles.

    Continue following along with this series to see how you can use Kubernetes to help adopt microservices patterns in your own environments!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/page/10/index.html b/cnny-2023/page/10/index.html index ffbbdaf00a..5bfcabaf7f 100644 --- a/cnny-2023/page/10/index.html +++ b/cnny-2023/page/10/index.html @@ -14,13 +14,13 @@ - +

    · 8 min read
    Paul Yu

    Welcome to Day 4 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about how to set app configurations and secrets at runtime using Kubernetes ConfigMaps and Secrets. Today we'll explore the topic of persistent storage on Kubernetes and show how you can leverage Persistent Volumes and Persistent Volume Claims to ensure your PostgreSQL data can survive container restarts.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Containers are ephemeral
    • Persistent storage on Kubernetes
    • Persistent storage on AKS
    • Takeaways
    • Resources

    Containers are ephemeral

    In our sample application, the frontend UI writes vote values to a backend PostgreSQL database. By default, the database container stores its data on the container's local file system, so there will be data loss when the pod is re-deployed or crashes, as containers are meant to start with a clean slate each time.

    Let's re-deploy our sample app and experience the problem first hand.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests

    Wait for the azure-voting-app service to be assigned a public IP then browse to the website and submit some votes. Use the command below to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Now, let's delete the pods and watch Kubernetes do what it does best... that is, re-schedule pods.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl delete --all pod --wait=false && kubectl get po -w

    Once the pods have been recovered, reload the website and confirm the vote tally has been reset to zero.

    We need to fix this so that the data outlives the container.

    Persistent storage on Kubernetes

    In order for application data to survive crashes and restarts, you must implement Persistent Volumes and Persistent Volume Claims.

    A persistent volume represents storage that is available to the cluster. Storage volumes can be provisioned manually by an administrator or dynamically using the Container Storage Interface (CSI) and storage classes, which include information on how to provision CSI volumes.

    When a user needs to add persistent storage to their application, a persistent volume claim is made to allocate chunks of storage from the volume. This "claim" includes things like volume mode (e.g., file system or block storage), the amount of storage to allocate, the access mode, and optionally a storage class. Once a persistent volume claim has been deployed, users can add the volume to the pod and mount it in a container.

    In the next section, we'll demonstrate how to enable persistent storage on AKS.

    Persistent storage on AKS

    With AKS, CSI drivers and storage classes are pre-deployed into your cluster. This allows you to natively use Azure Disks, Azure Files, and Azure Blob Storage as persistent volumes. You can either bring your own Azure storage account and use it with AKS or have AKS provision an Azure storage account for you.

    To view the Storage CSI drivers that have been enabled in your AKS cluster, run the following command.

    az aks show \
    --name <YOUR_AKS_NAME> \
    --resource-group <YOUR_AKS_RESOURCE_GROUP> \
    --query storageProfile

    You should see output that looks like this.

    {
    "blobCsiDriver": null,
    "diskCsiDriver": {
    "enabled": true,
    "version": "v1"
    },
    "fileCsiDriver": {
    "enabled": true
    },
    "snapshotController": {
    "enabled": true
    }
    }

    To view the storage classes that have been installed in your cluster, run the following command.

    kubectl get storageclass

    Workload requirements will dictate which CSI driver and storage class you will need to use.

    If you need block storage, then you should use the blobCsiDriver. The driver may not be enabled by default but you can enable it by following instructions which can be found in the Resources section below.

    If you need file storage you should leverage either diskCsiDriver or fileCsiDriver. The decision between these two boils down to whether or not you need to have the underlying storage accessible by one pod or multiple pods. It is important to note that diskCsiDriver currently supports access from a single pod only. Therefore, if you need data to be accessible by multiple pods at the same time, then you should opt for fileCsiDriver.

    For our PostgreSQL deployment, we'll use the diskCsiDriver and have AKS create an Azure Disk resource for us. There is no need to create a PV resource; all we need to do is create a PVC using the managed-csi-premium storage class.

    Run the following command to create the PVC.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-azuredisk
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: managed-csi-premium
    EOF

    When you check the PVC resource, you'll notice the STATUS is set to Pending. It will be set to Bound once the volume is mounted in the PostgreSQL container.

    kubectl get persistentvolumeclaim

    Let's delete the azure-voting-db deployment.

    kubectl delete deploy azure-voting-db

    Next, we need to apply an updated deployment manifest which includes our PVC.

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      creationTimestamp: null
      labels:
        app: azure-voting-db
      name: azure-voting-db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: azure-voting-db
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: azure-voting-db
        spec:
          containers:
          - image: postgres:15.0-alpine
            name: postgres
            ports:
            - containerPort: 5432
            env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: azure-voting-secret
                  key: POSTGRES_PASSWORD
            resources: {}
            volumeMounts:
            - name: mypvc
              mountPath: "/var/lib/postgresql/data"
              subPath: "data"
          volumes:
          - name: mypvc
            persistentVolumeClaim:
              claimName: pvc-azuredisk
    EOF

    In the manifest above, you'll see that we are mounting a new volume called mypvc (the name can be whatever you want) in the pod which points to a PVC named pvc-azuredisk. With the volume in place, we can mount it in the container by referencing the name of the volume mypvc and setting the mount path to /var/lib/postgresql/data (which is the default path).

    💡 IMPORTANT: When mounting a volume into a non-empty subdirectory, you must add subPath to the volume mount and point it to a subdirectory in the volume rather than mounting at root. In our case, when Azure Disk is formatted, it leaves a lost+found directory as documented here.

    Watch the pods and wait for the STATUS to show Running and the pod's READY status to show 1/1.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl get po -w

    Verify that the STATUS of the PVC is now set to Bound.

    kubectl get persistentvolumeclaim

    With the new database container running, let's restart the application pod, wait for the pod's READY status to show 1/1, then head back over to our web browser and submit a few votes.

    kubectl delete pod -lapp=azure-voting-app --wait=false && kubectl get po -lapp=azure-voting-app -w

    Now the moment of truth... let's rip out the pods again, wait for the pods to be re-scheduled, and confirm our vote counts remain intact.

    kubectl delete --all pod --wait=false && kubectl get po -w

    If you navigate back to the website, you'll find the votes are still there 🎉

    Takeaways

    By design, containers are meant to be ephemeral, and stateless workloads are ideal on Kubernetes. However, there will come a time when your data needs to outlive the container. To persist data in your Kubernetes workloads, you need to leverage PVs, PVCs, and optionally storage classes. In our demo scenario, we leveraged CSI drivers built into AKS and created a PVC using pre-installed storage classes. From there, we updated the database deployment to mount the PVC in the container, and AKS did the rest of the work in provisioning the underlying Azure Disk. If the built-in storage classes do not fit your needs (for example, you need to change the reclaimPolicy or the SKU of the Azure resource), you can create your own custom storage class and configure it just the way you need it 😊

    We'll revisit this topic again next week but in the meantime, check out some of the resources listed below to learn more.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/page/11/index.html b/cnny-2023/page/11/index.html index 43c2df3fd7..1edc494034 100644 --- a/cnny-2023/page/11/index.html +++ b/cnny-2023/page/11/index.html @@ -14,13 +14,13 @@ - +

    · 10 min read
    Steven Murawski

    Welcome to Day 5 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about adding persistent storage to our deployment. Today we'll explore the topic of scaling pods and nodes in our Kubernetes cluster.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Scaling Our Application
    • Scaling Pods
    • Scaling Nodes
    • Exercise
    • Resources

    Scaling Our Application

    One of our primary reasons to use a service like Kubernetes to orchestrate our workloads is the ability to scale. We've approached scaling in a multitude of ways over the years, taking advantage of the ever-evolving levels of hardware and software. Kubernetes allows us to scale our units of work, Pods, and the Nodes they run on. This allows us to take advantage of both hardware and software scaling abilities. Kubernetes can help improve the utilization of existing hardware (by scheduling Pods on Nodes that have resource capacity). And, with the capabilities of virtualization and/or cloud hosting (or a bit more work, if you have a pool of physical machines), Kubernetes can expand (or contract) the number of Nodes capable of hosting Pods. Scaling is primarily driven by resource utilization, but can be triggered by a variety of other sources thanks to projects like Kubernetes Event-driven Autoscaling (KEDA).

    Scaling Pods

    Our first level of scaling is with our Pods. Earlier, when we worked on our deployment, we talked about how Kubernetes would use the deployment configuration to ensure that we had the desired workloads running. One thing we didn't explore was running more than one instance of a pod. We can define a number of replicas of a pod in our Deployment.

    Manually Scale Pods

    So, if we wanted to define more pods right at the start (or at any point really), we could update our deployment configuration file with the number of replicas and apply that configuration file.

    spec:
      replicas: 5

    Or we could use the kubectl scale command to update the deployment with a number of pods to create.

    kubectl scale --replicas=5 deployment/azure-voting-app

    Both of these approaches modify the running configuration of our Kubernetes cluster and request that it ensure that we have that set number of replicas running. Because this was a manual change, the Kubernetes cluster won't automatically increase or decrease the number of pods. It'll just ensure that there are always the specified number of pods running.

    Autoscale Pods with the Horizontal Pod Autoscaler

    Another approach to scaling our pods is to allow the Horizontal Pod Autoscaler to help us scale in response to resources being used by the pod. This requires a bit more configuration up front. When we define our pod in our deployment, we need to include resource requests and limits. The requests help Kubernetes determine what nodes may have capacity for a new instance of a pod. The limit tells us where the node should cap utilization for a particular instance of a pod. For example, we'll update our deployment to request 0.25 CPU and set a limit of 0.5 CPU.

    spec:
      containers:
      - image: acrudavoz.azurecr.io/cnny2023/azure-voting-app-rust:ca4
        name: azure-voting-app-rust
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          value: postgres://postgres:mypassword@10.244.0.29
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m

    Now that we've given Kubernetes an allowed range and an idea of what free resources a node should have to place new pods, we can set up autoscaling. Because autoscaling is a persistent configuration, I like to define it in a configuration file that I'll be able to keep with the rest of my cluster configuration. We'll use the kubectl command to help us write the configuration file. We'll request that Kubernetes watch our pods and, when the average CPU utilization is 50% of the requested usage (in our case, if it's using more than 0.375 CPU across the current number of pods), it can grow the number of pods serving requests up to 10. If the utilization drops, Kubernetes will have permission to deprovision pods down to the minimum (three in our example).

    kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client

    Which would give us:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      creationTimestamp: null
      name: azure-voting-app
    spec:
      maxReplicas: 10
      minReplicas: 3
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: azure-voting-app
      targetCPUUtilizationPercentage: 50
    status:
      currentReplicas: 0
      desiredReplicas: 0

    So, how often does the autoscaler check the metrics being monitored? The autoscaler checks the Metrics API every 15 seconds, however the pod stats are only updated every 60 seconds. This means that an autoscale event may be evaluated about once a minute. Once a scale-down event happens, however, Kubernetes has a cooldown period to give the new pods a chance to distribute the workload and let the new metrics accumulate. There is no delay on scale-up events.

    Application Architecture Considerations

    We've focused in this example on our front end, which is an easier scaling story. When we start talking about scaling our database layers or anything that deals with persistent storage or has primary/replica configuration requirements things get a bit more complicated. Some of these applications may have built-in leader election or could use sidecars to help use existing features in Kubernetes to perform that function. For shared storage scenarios, persistent volumes (or persistent volumes with Azure) can be of help, if the application knows how to play well with shared file access.

    Ultimately, you know your application architecture and, while Kubernetes may not have an exact match to how you are doing things today, the underlying capability is probably there under a different name. This abstraction allows you to more effectively use Kubernetes to operate a variety of workloads with the levels of controls you need.

    Scaling Nodes

    We've looked at how to scale our pods, but that assumes we have enough resources in our existing pool of nodes to accommodate those scaling requests. Kubernetes can also help scale our available nodes to ensure that our applications have the necessary resources to meet their performance requirements.

    Manually Scale Nodes

    Manually scaling nodes isn't a direct function of Kubernetes, so your operating environment instructions may vary. On Azure, it's pretty straightforward. Using the Azure CLI (or other tools), we can tell our AKS cluster to scale up or scale down the number of nodes in our node pool.

    First, we'll check out how many nodes we currently have in our working environment.

    kubectl get nodes

    This will show us

    azure-voting-app-rust ❯  kubectl get nodes
    NAME                            STATUS   ROLES   AGE     VERSION
    aks-pool0-37917684-vmss000000   Ready    agent   5d21h   v1.24.6

    Then, we'll scale it up to three nodes.

    az aks scale --resource-group $ResourceGroup --name $AksName --node-count 3

    Then, we'll check out how many nodes we now have in our working environment.

    kubectl get nodes

    Which returns:

    azure-voting-app-rust ❯  kubectl get nodes
    NAME                            STATUS   ROLES   AGE     VERSION
    aks-pool0-37917684-vmss000000   Ready    agent   5d21h   v1.24.6
    aks-pool0-37917684-vmss000001   Ready    agent   5m27s   v1.24.6
    aks-pool0-37917684-vmss000002   Ready    agent   5m10s   v1.24.6

    Autoscale Nodes with the Cluster Autoscaler

    Things get more interesting when we start working with the Cluster Autoscaler. The Cluster Autoscaler watches for the inability of Kubernetes to schedule the required number of pods due to resource constraints (and a few other criteria like affinity/anti-affinity). If there are insufficient resources available on the existing nodes, the autoscaler can provision new nodes into the nodepool. Likewise, the autoscaler watches to see if the existing pods could be consolidated to a smaller set of nodes and can remove excess nodes.

    Enabling the autoscaler is likewise an update that can be dependent on where and how your Kubernetes cluster is hosted. Azure makes it easy with a simple Azure CLI command.

    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 1 `
    --max-count 5

    There are a variety of settings that can be configured to tune how the autoscaler works.

    Scaling on Different Events

    CPU and memory utilization are the primary drivers for the Horizontal Pod Autoscaler, but those might not be the best measures as to when you might want to scale workloads. There are other options for scaling triggers and one of the more common plugins to help with that is the Kubernetes Event-driven Autoscaling (KEDA) project. The KEDA project makes it easy to plug in different event sources to help drive scaling. Find more information about using KEDA on AKS here.

    Exercise

    Let's try out the scaling configurations that we just walked through using our sample application. If you still have your environment from Day 1, you can use that.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Configure Horizontal Pod Autoscaler

    • Edit ./manifests/deployment-app.yaml to include resource requests and limits.
      resources:
        requests:
          cpu: 250m
        limits:
          cpu: 500m
    • Apply the updated deployment configuration.
      kubectl apply -f ./manifests/deployment-app.yaml
    • Create the horizontal pod autoscaler configuration and apply it.
      kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client > ./manifests/scaler-app.yaml
      kubectl apply -f ./manifests/scaler-app.yaml
    • Check to see your pods scale out to the minimum.
      kubectl get pods

    Configure Cluster Autoscaler

    Configuring the basic behavior of the Cluster Autoscaler is a bit simpler. We just need to run the Azure CLI command to enable the autoscaler and define our lower and upper limits.

    • Check the current nodes available (should be 1).
    kubectl get nodes
    • Update the cluster to enable the autoscaler
    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 2 `
    --max-count 5
    • Check to see the current number of nodes (should be 2 now).
    kubectl get nodes

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training


    · 14 min read
    Steven Murawski

    Welcome to Day 1 of Week 3 of #CloudNativeNewYear!

The theme for this week is Bringing Your Application to Kubernetes. Last week we talked about Kubernetes Fundamentals. Today we'll explore getting an existing application running in Kubernetes with a full pipeline in GitHub Actions.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Our Application
    • Adding Some Infrastructure as Code
    • Building and Publishing a Container Image
    • Deploying to Kubernetes
    • Summary
    • Resources

    Our Application

This week we'll be taking an existing application - something similar to a typical line of business application - and setting it up to run in Kubernetes. Over the course of the week, we'll address different concerns. Today we'll focus on updating our CI/CD process to handle standing up (or validating that we have) an Azure Kubernetes Service (AKS) environment, building and publishing container images for our web site and API server, and getting those services running in Kubernetes.

    The application we'll be starting with is eShopOnWeb. This application has a web site and API which are backed by a SQL Server instance. It's built in .NET 7, so it's cross-platform.

    info

For the enterprising among you, you may notice that there are a number of different eShopOn* variants on GitHub, including eShopOnContainers. We aren't using that example as it's more of an end state than a starting place. Afterwards, feel free to check out that example to see what this solution could look like as a series of microservices.

    Adding Some Infrastructure as Code

    Just like last week, we need to stand up an AKS environment. This week, however, rather than running commands in our own shell, we'll set up GitHub Actions to do that for us.

There is a LOT of plumbing in this section, but once it's set up, it'll make our lives a lot easier. This section ensures that we have an environment to deploy our application into, configured the way we want. We can easily extend this to accommodate multiple environments or add additional microservices with minimal new effort.

    Federated Identity

Setting up a federated identity gives us a more secure and auditable way of accessing Azure from GitHub Actions. For more about setting up a federated identity, Microsoft Learn has the details on connecting GitHub Actions to Azure.

Here, we'll just walk through the setup of the identity and configure GitHub to use that identity to deploy our AKS environment and interact with our Azure Container Registry.

    The examples will use PowerShell, but a Bash version of the setup commands is available in the week3/day1 branch.

    Prerequisites

    To follow along, you'll need:

    • a GitHub account
    • an Azure Subscription
    • the Azure CLI
    • and the Git CLI.

    You'll need to fork the source repository under your GitHub user or organization where you can manage secrets and GitHub Actions.

    It would be helpful to have the GitHub CLI, but it's not required.

    Set Up Some Defaults

    You will need to update one or more of the variables (your user or organization, what branch you want to work off of, and possibly the Azure AD application name if there is a conflict).

    # Replace the gitHubOrganizationName value
    # with the user or organization you forked
    # the repository under.

    $githubOrganizationName = 'Azure-Samples'
    $githubRepositoryName = 'eShopOnAKS'
    $branchName = 'week3/day1'
    $applicationName = 'cnny-week3-day1'

    Create an Azure AD Application

    Next, we need to create an Azure AD application.

    # Create an Azure AD application
    $aksDeploymentApplication = New-AzADApplication -DisplayName $applicationName

    Set Up Federation for that Azure AD Application

    And configure that application to allow federated credential requests from our GitHub repository for a particular branch.

    # Create a federated identity credential for the application
    New-AzADAppFederatedCredential `
    -Name $applicationName `
    -ApplicationObjectId $aksDeploymentApplication.Id `
    -Issuer 'https://token.actions.githubusercontent.com' `
    -Audience 'api://AzureADTokenExchange' `
    -Subject "repo:$($githubOrganizationName)/$($githubRepositoryName):ref:refs/heads/$branchName"

    Create a Service Principal for the Azure AD Application

    Once the application has been created, we need a service principal tied to that application. The service principal can be granted rights in Azure.

    # Create a service principal for the application
    New-AzADServicePrincipal -AppId $($aksDeploymentApplication.AppId)

    Give that Service Principal Rights to Azure Resources

    Because our Bicep deployment exists at the subscription level and we are creating role assignments, we need to give it Owner rights. If we changed the scope of the deployment to just a resource group, we could apply more scoped permissions.

    $azureContext = Get-AzContext
    New-AzRoleAssignment `
    -ApplicationId $($aksDeploymentApplication.AppId) `
    -RoleDefinitionName Owner `
-Scope "/subscriptions/$($azureContext.Subscription.Id)"

    Add Secrets to GitHub Repository

    If you have the GitHub CLI, you can use that right from your shell to set the secrets needed.

    gh secret set AZURE_CLIENT_ID --body $aksDeploymentApplication.AppId
    gh secret set AZURE_TENANT_ID --body $azureContext.Tenant.Id
    gh secret set AZURE_SUBSCRIPTION_ID --body $azureContext.Subscription.Id
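As a quick sanity check that the secrets ended up in the right repository, you can list them; gh secret list is a standard GitHub CLI command, and the repository path assumes the variables set earlier.

gh secret list --repo "$githubOrganizationName/$githubRepositoryName"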

    Otherwise, you can create them through the web interface like I did in the Learn Live event below.

    info

    It may look like the whole video will play, but it'll stop after configuring the secrets in GitHub (after about 9 minutes)

    The video shows creating the Azure AD application, service principals, and configuring the federated identity in Azure AD and GitHub.

    Creating a Bicep Deployment

Reusable Workflows

We'll create our Bicep deployment as a reusable workflow. What are reusable workflows? The previous link has the documentation, and in the video below my colleague Brandon Martinez and I talk about them.

    This workflow is basically the same deployment we did in last week's series, just in GitHub Actions.

    Start by creating a file called deploy_aks.yml in the .github/workflows directory with the below contents.

name: deploy

on:
  workflow_call:
    inputs:
      resourceGroupName:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true
    outputs:
      containerRegistryName:
        description: Container Registry Name
        value: ${{ jobs.deploy.outputs.containerRegistryName }}
      containerRegistryUrl:
        description: Container Registry Login Url
        value: ${{ jobs.deploy.outputs.containerRegistryUrl }}
      resourceGroupName:
        description: Resource Group Name
        value: ${{ jobs.deploy.outputs.resourceGroupName }}
      aksName:
        description: Azure Kubernetes Service Cluster Name
        value: ${{ jobs.deploy.outputs.aksName }}

permissions:
  id-token: write
  contents: read

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/arm-deploy@v1
        name: Run preflight validation
        with:
          deploymentName: ${{ github.run_number }}
          scope: subscription
          region: eastus
          template: ./deploy/main.bicep
          parameters: >
            resourceGroup=${{ inputs.resourceGroupName }}
          deploymentMode: Validate

  deploy:
    needs: validate
    runs-on: ubuntu-latest
    outputs:
      containerRegistryName: ${{ steps.deploy.outputs.acr_name }}
      containerRegistryUrl: ${{ steps.deploy.outputs.acr_login_server_url }}
      resourceGroupName: ${{ steps.deploy.outputs.resource_group_name }}
      aksName: ${{ steps.deploy.outputs.aks_name }}
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/arm-deploy@v1
        id: deploy
        name: Deploy Bicep file
        with:
          failOnStdErr: false
          deploymentName: ${{ github.run_number }}
          scope: subscription
          region: eastus
          template: ./deploy/main.bicep
          parameters: >
            resourceGroup=${{ inputs.resourceGroupName }}

    Adding the Bicep Deployment

    Once we have the Bicep deployment workflow, we can add it to the primary build definition in .github/workflows/dotnetcore.yml

    Permissions

    First, we need to add a permissions block to let the workflow request our Azure AD token. This can go towards the top of the YAML file (I started it on line 5).

permissions:
  id-token: write
  contents: read

    Deploy AKS Job

    Next, we'll add a reference to our reusable workflow. This will go after the build job.

  deploy_aks:
    needs: [build]
    uses: ./.github/workflows/deploy_aks.yml
    with:
      resourceGroupName: 'cnny-week3'
    secrets:
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Building and Publishing a Container Image

    Now that we have our target environment in place and an Azure Container Registry, we can build and publish our container images.

    Add a Reusable Workflow

    First, we'll create a new file for our reusable workflow at .github/workflows/publish_container_image.yml.

    We'll start the file with a name, the parameters it needs to run, and the permissions requirements for the federated identity request.

name: Publish Container Images

on:
  workflow_call:
    inputs:
      containerRegistryName:
        required: true
        type: string
      containerRegistryUrl:
        required: true
        type: string
      githubSha:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write
  contents: read

    Build the Container Images

Our next step is to build the two container images we'll need for the application: the website and the API. We'll build the container images on our build worker and tag them with the git SHA, so there'll be a direct tie between a point in time in our codebase and the container images that represent it.

jobs:
  publish_container_image:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - name: docker build
        run: |
          docker build . -f src/Web/Dockerfile -t ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          docker build . -f src/PublicApi/Dockerfile -t ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Scan the Container Images

    Before we publish those container images, we'll scan them for vulnerabilities and best practice violations. We can add these two steps (one scan for each image).

      - name: scan web container image
        uses: Azure/container-scan@v0
        with:
          image-name: ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
      - name: scan api container image
        uses: Azure/container-scan@v0
        with:
          image-name: ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

The container images provided have a few items that'll be flagged. We can create an allowed list at .github/containerscan/allowedlist.yaml to define vulnerabilities or best practice violations that we'll explicitly allow so they don't fail our build.

general:
  vulnerabilities:
    - CVE-2022-29458
    - CVE-2022-3715
    - CVE-2022-1304
    - CVE-2021-33560
    - CVE-2020-16156
    - CVE-2019-8457
    - CVE-2018-8292
  bestPracticeViolations:
    - CIS-DI-0001
    - CIS-DI-0005
    - CIS-DI-0006
    - CIS-DI-0008

    Publish the Container Images

    Finally, we'll log in to Azure, then log in to our Azure Container Registry, and push our images.

      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: acr login
        run: az acr login --name ${{ inputs.containerRegistryName }}
      - name: docker push
        run: |
          docker push ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          docker push ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Update the Build With the Image Build and Publish

Now that we have our reusable workflow to create and publish our container images, we can include that in our primary build definition at .github/workflows/dotnetcore.yml.

  publish_container_image:
    needs: [deploy_aks]
    uses: ./.github/workflows/publish_container_image.yml
    with:
      containerRegistryName: ${{ needs.deploy_aks.outputs.containerRegistryName }}
      containerRegistryUrl: ${{ needs.deploy_aks.outputs.containerRegistryUrl }}
      githubSha: ${{ github.sha }}
    secrets:
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Deploying to Kubernetes

    Finally, we've gotten enough set up that a commit to the target branch will:

    • build and test our application code
    • set up (or validate) our AKS and ACR environment
    • and create, scan, and publish our container images to ACR

    Our last step will be to deploy our application to Kubernetes. We'll use the basic building blocks we worked with last week, deployments and services.

    Starting the Reusable Workflow to Deploy to AKS

    We'll start our workflow with our parameters that we need, as well as the permissions to access the token to log in to Azure.

    We'll check out our code, then log in to Azure, and use the az CLI to get credentials for our AKS cluster.

name: deploy_to_aks

on:
  workflow_call:
    inputs:
      aksName:
        required: true
        type: string
      resourceGroupName:
        required: true
        type: string
      containerRegistryUrl:
        required: true
        type: string
      githubSha:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: Get AKS Credentials
        run: |
          az aks get-credentials --resource-group ${{ inputs.resourceGroupName }} --name ${{ inputs.aksName }}

    Edit the Deployment For Our Current Image Tag

    Let's add the Kubernetes manifests to our repo. This post is long enough, so you can find the content for the manifests folder in the manifests folder in the source repo under the week3/day1 branch.

    tip

    If you only forked the main branch of the source repo, you can easily get the updated manifests by using the following git commands:

    git remote add upstream https://github.com/Azure-Samples/eShopOnAks
    git fetch upstream week3/day1
    git checkout upstream/week3/day1 manifests

    This will make the week3/day1 branch available locally and then we can update the manifests directory to match the state of that branch.

The deployments and the service definitions should be familiar from last week's content (but not identical). This week, however, there's a new file in the manifests - ./manifests/kustomization.yaml

This file helps us edit our Kubernetes manifests more dynamically, and support for it is baked right into the kubectl command.

    Kustomize Definition

Kustomize lets us target specific resource manifests and the parts of those manifests to replace. We've also put some placeholders in our file so we can swap in values for each run of our CI/CD system.

    In ./manifests/kustomization.yaml you will see:

resources:
  - deployment-api.yaml
  - deployment-web.yaml

# Change the image name and version
images:
  - name: notavalidregistry.azurecr.io/api:v0.1.0
    newName: <YOUR_ACR_SERVER>/api
    newTag: <YOUR_IMAGE_TAG>
  - name: notavalidregistry.azurecr.io/web:v0.1.0
    newName: <YOUR_ACR_SERVER>/web
    newTag: <YOUR_IMAGE_TAG>
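Once the placeholders have real values, you can preview what Kustomize will produce before anything is applied; both commands below are standard kubectl and make no changes to the cluster.

# Render the kustomized manifests locally
kubectl kustomize ./manifests

# Or run the apply as a client-side dry run
kubectl apply -k ./manifests --dry-run=client -o yaml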

    Replacing Values in our Build

    Now, we encounter a little problem - our deployment files need to know the tag and ACR server. We can do a bit of sed magic to edit the file on the fly.

    In .github/workflows/deploy_to_aks.yml, we'll add:

      - name: replace_placeholders_with_current_run
        run: |
          sed -i "s/<YOUR_ACR_SERVER>/${{ inputs.containerRegistryUrl }}/g" ./manifests/kustomization.yaml
          sed -i "s/<YOUR_IMAGE_TAG>/${{ inputs.githubSha }}/g" ./manifests/kustomization.yaml

    Deploying the Manifests

With our manifests in place and our kustomization.yaml file (plus the commands to update it at runtime) ready to go, we can deploy our manifests.

First, we'll deploy our database (deployment and service). Next, we'll use the -k parameter on kubectl to tell it to look for a kustomize configuration, transform the requested manifests, and apply those. Finally, we apply the service definitions for the web and API deployments.

        run: |
          kubectl apply -f ./manifests/deployment-db.yaml \
            -f ./manifests/service-db.yaml
          kubectl apply -k ./manifests
          kubectl apply -f ./manifests/service-api.yaml \
            -f ./manifests/service-web.yaml

    Summary

    We've covered a lot of ground in today's post. We set up federated credentials with GitHub. Then we added reusable workflows to deploy an AKS environment and build/scan/publish our container images, and then to deploy them into our AKS environment.

    This sets us up to start making changes to our application and Kubernetes configuration and have those changes automatically validated and deployed by our CI/CD system. Tomorrow, we'll look at updating our application environment with runtime configuration, persistent storage, and more.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 12 min read
    Paul Yu

    Welcome to Day 2 of Week 3 of #CloudNativeNewYear!

The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about getting an existing application running in Kubernetes with a full pipeline in GitHub Actions. Today we'll evaluate our sample application's configuration, storage, and networking requirements and implement them using Kubernetes and Azure resources.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Implement environment variables using ConfigMaps
    • Implement persistent volumes using Azure Files
    • Implement secrets using Azure Key Vault
    • Re-package deployments
    • Conclusion
    • Resources
    caution

    Before you begin, make sure you've gone through yesterday's post to set up your AKS cluster.

    Gather requirements

The eShopOnWeb application is written in .NET 7 and has two major pieces of functionality. The web UI is where customers can browse and shop. The web UI also includes an admin portal for managing the product catalog. This admin portal is packaged as a WebAssembly application and relies on a separate REST API service. Both the web UI and the REST API connect to the same SQL Server container.

Looking through the source code (which can be found here), we can identify requirements for configs, persistent storage, and secrets.

    Database server

    • Need to store the password for the sa account as a secure secret
    • Need persistent storage volume for data directory
    • Need to inject environment variables for SQL Server license type and EULA acceptance

    Web UI and REST API service

    • Need to store database connection string as a secure secret
    • Need to inject ASP.NET environment variables to override app settings
    • Need persistent storage volume for ASP.NET key storage

    Implement environment variables using ConfigMaps

ConfigMaps are relatively straightforward to create. If you were following along with the examples last week, this should be review 😉

    Create a ConfigMap to store database environment variables.

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: mssql-settings
data:
  MSSQL_PID: Developer
  ACCEPT_EULA: "Y"
EOF

    Create another ConfigMap to store ASP.NET environment variables.

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aspnet-settings
data:
  ASPNETCORE_ENVIRONMENT: Development
EOF
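To confirm both ConfigMaps were created with the expected keys, you can inspect them with standard kubectl commands.

kubectl get configmap mssql-settings aspnet-settings
kubectl describe configmap mssql-settings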

    Implement persistent volumes using Azure Files

    Similar to last week, we'll take advantage of storage classes built into AKS. For our SQL Server data, we'll use the azurefile-csi-premium storage class and leverage an Azure Files resource as our PersistentVolume.

    Create a PersistentVolumeClaim (PVC) for persisting SQL Server data.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 5Gi
EOF

    Create another PVC for persisting ASP.NET data.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aspnet-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 5Gi
EOF
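You can check on both claims with kubectl; depending on the storage class's binding mode, they may bind right away or stay Pending until a Pod first mounts them.

kubectl get pvc mssql-data aspnet-data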

    Implement secrets using Azure Key Vault

It's a well-known fact that Kubernetes Secrets are not really secret. They're just base64-encoded values and not secure, especially if malicious users have access to your Kubernetes cluster.

    In a production scenario, you will want to leverage an external vault like Azure Key Vault or HashiCorp Vault to encrypt and store secrets.

    With AKS, we can enable the Secrets Store CSI driver add-on which will allow us to leverage Azure Key Vault.

    # Set some variables
    RG_NAME=<YOUR_RESOURCE_GROUP_NAME>
    AKS_NAME=<YOUR_AKS_CLUSTER_NAME>
    ACR_NAME=<YOUR_ACR_NAME>

    az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name $AKS_NAME \
    --resource-group $RG_NAME

    With the add-on enabled, you should see aks-secrets-store-csi-driver and aks-secrets-store-provider-azure resources installed on each node in your Kubernetes cluster.

    Run the command below to verify.

    kubectl get pods \
    --namespace kube-system \
    --selector 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'

    The Secrets Store CSI driver allows us to use secret stores via Container Storage Interface (CSI) volumes. This provider offers capabilities such as mounting and syncing between the secure vault and Kubernetes Secrets. On AKS, the Azure Key Vault Provider for Secrets Store CSI Driver enables integration with Azure Key Vault.

    You may not have an Azure Key Vault created yet, so let's create one and add some secrets to it.

    AKV_NAME=$(az keyvault create \
    --name akv-eshop$RANDOM \
    --resource-group $RG_NAME \
    --query name -o tsv)

    # Database server password
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-password \
    --value "@someThingComplicated1234"

    # Catalog database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-catalog \
    --value "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    # Identity database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-identity \
    --value "Server=db;Database=Microsoft.eShopOnWeb.Identity;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"
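Before wiring the vault up to the cluster, it's worth confirming the three secrets exist; this query just lists their names.

az keyvault secret list --vault-name $AKV_NAME --query "[].name" -o table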

    Pods authentication using Azure Workload Identity

    In order for our Pods to retrieve secrets from Azure Key Vault, we'll need to set up a way for the Pod to authenticate against Azure AD. This can be achieved by implementing the new Azure Workload Identity feature of AKS.

    info

    At the time of this writing, the workload identity feature of AKS is in Preview.

    The workload identity feature within AKS allows us to leverage native Kubernetes resources and link a Kubernetes ServiceAccount to an Azure Managed Identity to authenticate against Azure AD.

For the authentication flow, our Kubernetes cluster will act as an OpenID Connect (OIDC) issuer and will be able to issue identity tokens to ServiceAccounts, which will be assigned to our Pods.

    The Azure Managed Identity will be granted permission to access secrets in our Azure Key Vault and with the ServiceAccount being assigned to our Pods, they will be able to retrieve our secrets.

    For more information on how the authentication mechanism all works, check out this doc.

    To implement all this, start by enabling the new preview feature for AKS.

    az feature register \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"
    caution

    This can take several minutes to complete.

Check the status and ensure the state shows Registered before moving forward.

    az feature show \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"

    Update your AKS cluster to enable the workload identity feature and enable the OIDC issuer endpoint.

    az aks update \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --enable-workload-identity \
    --enable-oidc-issuer

    Create an Azure Managed Identity and retrieve its client ID.

    MANAGED_IDENTITY_CLIENT_ID=$(az identity create \
    --name aks-workload-identity \
    --resource-group $RG_NAME \
    --subscription $(az account show --query id -o tsv) \
    --query 'clientId' -o tsv)

    Create the Kubernetes ServiceAccount.

    # Set namespace (this must align with the namespace that your app is deployed into)
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account name
    SERVICE_ACCOUNT_NAME=eshop-serviceaccount

    # Create the service account
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
  labels:
    azure.workload.identity/use: "true"
  name: ${SERVICE_ACCOUNT_NAME}
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
EOF
    info

Note: to enable this ServiceAccount to work with Azure Workload Identity, you must annotate the resource with azure.workload.identity/client-id and add a label of azure.workload.identity/use: "true".

    That was a lot... Let's review what we just did.

    We have an Azure Managed Identity (object in Azure AD), an OIDC issuer URL (endpoint in our Kubernetes cluster), and a Kubernetes ServiceAccount.

    The next step is to "tie" these components together and establish a Federated Identity Credential so that Azure AD can trust authentication requests from your Kubernetes cluster.

    info

This identity federation can be established between Azure AD and any Kubernetes cluster, not just AKS 🤗

    To establish the federated credential, we'll need the OIDC issuer URL, and a subject which points to your Kubernetes ServiceAccount.

    # Get the OIDC issuer URL
    OIDC_ISSUER_URL=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)

    # Set the subject name using this format: `system:serviceaccount:<YOUR_SERVICE_ACCOUNT_NAMESPACE>:<YOUR_SERVICE_ACCOUNT_NAME>`
    SUBJECT=system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME

    az identity federated-credential create \
    --name aks-federated-credential \
    --identity-name aks-workload-identity \
    --resource-group $RG_NAME \
    --issuer $OIDC_ISSUER_URL \
    --subject $SUBJECT

With the authentication components set, we can now create a SecretProviderClass, which includes details about the Azure Key Vault, the secrets to pull from the vault, and the identity used to access the vault.

    # Get the tenant id for the key vault
    TENANT_ID=$(az keyvault show \
    --name $AKV_NAME \
    --resource-group $RG_NAME \
    --query properties.tenantId -o tsv)

    # Create the secret provider for azure key vault
    kubectl apply -f - <<EOF
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: eshop-azure-keyvault
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "${MANAGED_IDENTITY_CLIENT_ID}"
    keyvaultName: "${AKV_NAME}"
    cloudName: ""
    objects: |
      array:
        - |
          objectName: mssql-password
          objectType: secret
          objectVersion: ""
        - |
          objectName: mssql-connection-catalog
          objectType: secret
          objectVersion: ""
        - |
          objectName: mssql-connection-identity
          objectType: secret
          objectVersion: ""
    tenantId: "${TENANT_ID}"
  secretObjects:
    - secretName: eshop-secrets
      type: Opaque
      data:
        - objectName: mssql-password
          key: mssql-password
        - objectName: mssql-connection-catalog
          key: mssql-connection-catalog
        - objectName: mssql-connection-identity
          key: mssql-connection-identity
EOF

Finally, let's grant the Azure Managed Identity permissions to retrieve secrets from the Azure Key Vault.

    az keyvault set-policy \
    --name $AKV_NAME \
    --secret-permissions get \
    --spn $MANAGED_IDENTITY_CLIENT_ID

    Re-package deployments

    Update your database deployment to load environment variables from our ConfigMap, attach the PVC and SecretProviderClass as volumes, mount the volumes into the Pod, and use the ServiceAccount to retrieve secrets.

    Additionally, you may notice the database Pod is set to use fsGroup:10001 as part of the securityContext. This is required as the MSSQL container runs using a non-root account called mssql and this account has the proper permissions to read/write data at the /var/opt/mssql mount path.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      securityContext:
        fsGroup: 10001
      serviceAccountName: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: db
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          envFrom:
            - configMapRef:
                name: mssql-settings
          env:
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-password
          resources: {}
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
EOF

    We'll update the API and Web deployments in a similar way.

    # Set the image tag
    IMAGE_TAG=<YOUR_IMAGE_TAG>

    # API deployment
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      serviceAccount: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: api
          image: ${ACR_NAME}.azurecr.io/api:${IMAGE_TAG}
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: aspnet-settings
          env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
          resources: {}
          volumeMounts:
            - name: aspnet
              mountPath: /root/.aspnet/https   # container path for the ASP.NET key/cert data on the aspnet-data PVC
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: aspnet
          persistentVolumeClaim:
            claimName: aspnet-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
EOF

    ## Web deployment
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      serviceAccount: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: web
          image: ${ACR_NAME}.azurecr.io/web:${IMAGE_TAG}
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: aspnet-settings
          env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
          resources: {}
          volumeMounts:
            - name: aspnet
              mountPath: /root/.aspnet/https   # container path for the ASP.NET key/cert data on the aspnet-data PVC
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: aspnet
          persistentVolumeClaim:
            claimName: aspnet-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
EOF

    If all went well with your deployment updates, you should be able to browse to your website and buy some merchandise again 🥳

    echo "http://$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Conclusion

Although there are no visible changes to our website, we've made a ton of changes on the Kubernetes backend to make this application much more secure and resilient.

    We used a combination of Kubernetes resources and AKS-specific features to achieve our goal of securing our secrets and ensuring data is not lost on container crashes and restarts.

To learn more about the components we leveraged here today, check out the resources and additional tutorials listed below.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 10 min read
    Paul Yu

    Welcome to Day 3 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we added configuration, secrets, and storage to our app. Today we'll explore how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Generate TLS certificate and store in Azure Key Vault
    • Implement custom DNS using Azure DNS
    • Enable Web Application Routing add-on for AKS
    • Implement Ingress for the web application
    • Conclusion
    • Resources

    Gather requirements

    Currently, our eShopOnWeb app has three Kubernetes services deployed:

    1. db exposed internally via ClusterIP
    2. api exposed externally via LoadBalancer
    3. web exposed externally via LoadBalancer

As mentioned in my post last week, Services allow applications to communicate with each other using DNS names. Kubernetes has built-in service discovery capabilities that allow Pods to resolve Services simply by using their names.

    In the case of our api and web deployments, they can simply reach the database by calling its name. The service type of ClusterIP for the db can remain as-is since it only needs to be accessed by the api and web apps.

    On the other hand, api and web both need to be accessed over the public internet. Currently, these services are using service type LoadBalancer which tells AKS to provision an Azure Load Balancer with a public IP address. No one is going to remember the IP addresses, so we need to make the app more accessible by adding a custom domain name and securing it with a TLS certificate.

    Here's what we're going to need:

    • Custom domain name for our app
    • TLS certificate for the custom domain name
    • Routing rule to ensure requests with /api/ in the URL are routed to the backend REST API
    • Routing rule to ensure requests without /api/ in the URL are routed to the web UI

    Just like last week, we will use the Web Application Routing add-on for AKS. But this time, we'll integrate it with Azure DNS and Azure Key Vault to satisfy all of our requirements above.

    info

    At the time of this writing the add-on is still in Public Preview

    Generate TLS certificate and store in Azure Key Vault

    We deployed an Azure Key Vault yesterday to store secrets. We'll use it again to store a TLS certificate too.

    Let's create and export a self-signed certificate for the custom domain.

    DNS_NAME=eshoponweb$RANDOM.com
    openssl req -new -x509 -nodes -out web-tls.crt -keyout web-tls.key -subj "/CN=${DNS_NAME}" -addext "subjectAltName=DNS:${DNS_NAME}"
    openssl pkcs12 -export -in web-tls.crt -inkey web-tls.key -out web-tls.pfx -password pass:
    info

    For learning purposes we'll use a self-signed certificate and a fake custom domain name.

    To browse to the site using the fake domain, we'll mimic a DNS lookup by adding an entry to your host file which maps the public IP address assigned to the ingress controller to the custom domain.

    In a production scenario, you will need to have a real domain delegated to Azure DNS and a valid TLS certificate for the domain.

    Grab your Azure Key Vault name and set the value in a variable for later use.

    RESOURCE_GROUP=cnny-week3

    AKV_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.KeyVault/vaults \
    --query "[0].name" -o tsv)

    Grant yourself permissions to get, list, and import certificates.

    MY_USER_NAME=$(az account show --query user.name -o tsv)
    MY_USER_OBJECT_ID=$(az ad user show --id $MY_USER_NAME --query id -o tsv)

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MY_USER_OBJECT_ID \
    --certificate-permissions get list import

    Upload the TLS certificate to Azure Key Vault and grab its certificate URI.

    WEB_TLS_CERT_ID=$(az keyvault certificate import \
    --vault-name $AKV_NAME \
    --name web-tls \
    --file web-tls.pfx \
    --query id \
    --output tsv)

    Implement custom DNS with Azure DNS

    Create a custom domain for our application and grab its Azure resource id.

    DNS_ZONE_ID=$(az network dns zone create \
    --name $DNS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query id \
    --output tsv)

    Enable Web Application Routing add-on for AKS

    As we enable the Web Application Routing add-on, we'll also pass in the Azure DNS Zone resource id which triggers the installation of the external-dns controller in your Kubernetes cluster. This controller will be able to write Azure DNS zone entries on your behalf as you deploy Ingress manifests.

    AKS_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerService/managedClusters \
    --query "[0].name" -o tsv)

    az aks enable-addons \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --addons web_application_routing \
    --dns-zone-resource-id=$DNS_ZONE_ID \
    --enable-secret-rotation

    The add-on will also deploy a new Azure Managed Identity which is used by the external-dns controller when writing Azure DNS zone entries. Currently, it does not have permission to do that, so let's grant it permission.

    # This is where resources are automatically deployed by AKS
    NODE_RESOURCE_GROUP=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query nodeResourceGroup -o tsv)

    # This is the managed identity created by the Web Application Routing add-on
MANAGED_IDENTITY_OBJECT_ID=$(az resource show \
    --name webapprouting-${AKS_NAME} \
    --resource-group $NODE_RESOURCE_GROUP \
    --resource-type Microsoft.ManagedIdentity/userAssignedIdentities \
    --query properties.principalId \
    --output tsv)

    # Grant the managed identity permissions to write DNS entries
    az role assignment create \
    --role "DNS Zone Contributor" \
--assignee $MANAGED_IDENTITY_OBJECT_ID \
    --scope $DNS_ZONE_ID

    The Azure Managed Identity will also be used to retrieve and rotate TLS certificates from Azure Key Vault. So we'll need to grant it permission for that too.

    az keyvault set-policy \
    --name $AKV_NAME \
--object-id $MANAGED_IDENTITY_OBJECT_ID \
    --secret-permissions get \
    --certificate-permissions get

    Implement Ingress for the web application

    Before we create a new Ingress manifest, let's update the existing services to use ClusterIP instead of LoadBalancer. With an Ingress in place, there is no reason why we need the Service resources to be accessible from outside the cluster. The new Ingress will be the only entrypoint for external users.

    We can use the kubectl patch command to update the services

    kubectl patch service api -p '{"spec": {"type": "ClusterIP"}}'
    kubectl patch service web -p '{"spec": {"type": "ClusterIP"}}'
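To confirm the patches took effect, list the two services and check that the TYPE column now shows ClusterIP; the public IPs they previously held should be released shortly after.

kubectl get service api web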

Deploy a new Ingress to place in front of the web Service. Notice there is a special annotation, kubernetes.azure.com/tls-cert-keyvault-uri, which points back to the self-signed certificate we uploaded to Azure Key Vault.

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.azure.com/tls-cert-keyvault-uri: ${WEB_TLS_CERT_ID}
  name: web
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
    - host: ${DNS_NAME}
      http:
        paths:
          - backend:
              service:
                name: web
                port:
                  number: 80
            path: /
            pathType: Prefix
          - backend:
              service:
                name: api
                port:
                  number: 80
            path: /api
            pathType: Prefix
  tls:
    - hosts:
        - ${DNS_NAME}
      secretName: web-tls
EOF

In the manifest above, we've also configured the Ingress to route traffic to either the web or api service based on the URL path requested. If the request URL includes /api/, traffic is sent to the api backend service. Otherwise, it is sent to the web service.

    Within a few minutes, the external-dns controller will add an A record to Azure DNS which points to the Ingress resource's public IP. With the custom domain in place, we can simply browse using this domain name.
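If you'd like to confirm the record was created, you can list the A records in the zone with the Azure CLI (read-only; the record name and IP will differ in your environment).

az network dns record-set a list \
  --resource-group $RESOURCE_GROUP \
  --zone-name $DNS_NAME \
  --output table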

    info

    As mentioned above, since this is not a real domain name, we need to modify our host file to make it seem like our custom domain is resolving to the Ingress' public IP address.

    To get the ingress public IP, run the following:

    # Get the IP
    kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

    # Get the hostname
    kubectl get ingress web -o jsonpath="{.spec.tls[0].hosts[0]}"

    Next, open your host file and add an entry using the format <YOUR_PUBLIC_IP> <YOUR_CUSTOM_DOMAIN>. Below is an example of what it should look like.

    20.237.116.224 eshoponweb11265.com

    See this doc for more info on how to do this.

    When browsing to the website, you may be presented with a warning about the connection not being private. This is due to the fact that we are using a self-signed certificate. This is expected, so go ahead and proceed anyway to load up the page.

    Why is the Admin page broken?

    If you log in using the admin@microsoft.com account and browse to the Admin page, you'll notice no products are loaded on the page.

    This is because the admin page is built using Blazor and compiled as a WebAssembly application that runs in your browser. When the application was compiled, it packed the appsettings.Development.json file as an embedded resource. This file contains the base URL for the public API and it currently points to https://localhost:5099. Now that we have a domain name, we can update the base URL and point it to our custom domain.

    From the root of the eShopOnWeb repo, update the configuration file using a sed command.

    sed -i -e "s/localhost:5099/${DNS_NAME}/g" ./src/BlazorAdmin/wwwroot/appsettings.Development.json

    Rebuild and push the container to Azure Container Registry.

    # Grab the name of your Azure Container Registry
    ACR_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerRegistry/registries \
    --query "[0].name" -o tsv)

    # Invoke a build and publish job
    az acr build \
    --registry $ACR_NAME \
    --image $ACR_NAME.azurecr.io/web:v0.1.0 \
    --file ./src/Web/Dockerfile .

    Once the container build has completed, we can issue a kubectl patch command to quickly update the web deployment to test our change.

kubectl patch deployment web -p "$(cat <<EOF
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "web",
            "image": "${ACR_NAME}.azurecr.io/web:v0.1.0"
          }
        ]
      }
    }
  }
}
EOF
)"

    If all went well, you will be able to browse the admin page again and confirm product data is being loaded 🥳

    Conclusion

The Web Application Routing add-on for AKS aims to streamline the process of exposing your application to the public using the open-source NGINX Ingress Controller. With the add-on being managed by Azure, it natively integrates with other Azure services like Azure DNS and eliminates the need to manually create DNS entries. It can also integrate with Azure Key Vault to automatically pull in TLS certificates and rotate them as needed, further reducing operational overhead.

    We are one step closer to production and in the upcoming posts we'll further operationalize and secure our deployment, so stay tuned!

    In the meantime, check out the resources listed below for further reading.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 9 min read
    Steven Murawski

    Welcome to Day 4 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we exposed the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS. Today we'll explore the topic of debugging and instrumentation.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Debugging
    • Bridge To Kubernetes
    • Instrumentation
    • Resources: For self-study!

    Debugging

    Debugging applications in a Kubernetes cluster can be challenging for several reasons:

    • Complexity: Kubernetes is a complex system with many moving parts, including pods, nodes, services, and config maps, all of which can interact in unexpected ways and cause issues.
    • Distributed Environment: Applications running in a Kubernetes cluster are often distributed across multiple nodes, which makes it harder to determine the root cause of an issue.
    • Logging and Monitoring: Debugging an application in a Kubernetes cluster requires access to logs and performance metrics, which can be difficult to obtain in a large and dynamic environment.
    • Resource Management: Kubernetes manages resources such as CPU and memory, which can impact the performance and behavior of applications. Debugging resource-related issues requires a deep understanding of the Kubernetes resource model and the underlying infrastructure.
    • Dynamic Nature: Kubernetes is designed to be dynamic, with the ability to add and remove resources as needed. This dynamic nature can make it difficult to reproduce issues and debug problems.

    However, there are many tools and practices that can help make debugging applications in a Kubernetes cluster easier, such as using centralized logging, monitoring, and tracing solutions, and following best practices for managing resources and deployment configurations.

    There's also another great tool in our toolbox - Bridge to Kubernetes.

    Bridge to Kubernetes

Bridge to Kubernetes is a great tool for microservice development; it lets you debug an application without having to replicate all the required microservices locally.

    Bridge to Kubernetes works with Visual Studio or Visual Studio Code.

    We'll walk through using it with Visual Studio Code.

    Connecting Bridge to Kubernetes to Our Cluster

    Ensure your AKS cluster is the default for kubectl

    If you've recently spun up a new AKS cluster or you have been working with a different cluster, you may need to change what cluster credentials you have configured.

    If it's a new cluster, we can use:

    RESOURCE_GROUP=<YOUR RESOURCE GROUP NAME>
    CLUSTER_NAME=<YOUR AKS CLUSTER NAME>
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME

    Open the command palette

    Open the command palette and find Bridge to Kubernetes: Configure. You may need to start typing the name to get it to show up.

    The command palette for Visual Studio Code is open and the first item is Bridge to Kubernetes: Configure

    Pick the service you want to debug

    Bridge to Kubernetes will redirect a service for you. Pick the service you want to redirect, in this case we'll pick web.

    Selecting the `web` service to redirect in Visual Studio Code

    Identify the port your application runs on

    Next, we'll be prompted to identify what port our application will run on locally. For this application it'll be 5001, but that's just specific to this application (and the default for ASP.NET 7, I believe).

    Setting port 5001 as the port to redirect to the `web` Kubernetes service in Visual Studio Code

    Pick a debug configuration to extend

Bridge to Kubernetes has a couple of ways to run - it can inject its setup and teardown into your existing debug configurations. We'll pick .NET Core Launch (web).

    Telling Bridge to Kubernetes to use the .NET Core Launch (web) debug configuration in Visual Studio Code

    Forward Traffic for All Requests

    The last prompt you'll get in the configuration is about how you want Bridge to Kubernetes to handle re-routing traffic. The default is that all requests into the service will get your local version.

    You can also redirect specific traffic. Bridge to Kubernetes will set up a subdomain and route specific traffic to your local service, while allowing other traffic to the deployed service.

    Allowing the launch of Endpoint Manager on Windows

    Using Bridge to Kubernetes to Debug Our Service

    Now that we've configured Bridge to Kubernetes, we see that tasks and a new launch configuration have been added.

    Added to .vscode/tasks.json:

        {
            "label": "bridge-to-kubernetes.resource",
            "type": "bridge-to-kubernetes.resource",
            "resource": "web",
            "resourceType": "service",
            "ports": [
                5001
            ],
            "targetCluster": "aks1",
            "targetNamespace": "default",
            "useKubernetesServiceEnvironmentVariables": false
        },
        {
            "label": "bridge-to-kubernetes.compound",
            "dependsOn": [
                "bridge-to-kubernetes.resource",
                "build"
            ],
            "dependsOrder": "sequence"
        }

    And added to .vscode/launch.json:

{
    "name": ".NET Core Launch (web) with Kubernetes",
    "type": "coreclr",
    "request": "launch",
    "preLaunchTask": "bridge-to-kubernetes.compound",
    "program": "${workspaceFolder}/src/Web/bin/Debug/net7.0/Web.dll",
    "args": [],
    "cwd": "${workspaceFolder}/src/Web",
    "stopAtEntry": false,
    "env": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "ASPNETCORE_URLS": "http://+:5001"
    },
    "sourceFileMap": {
        "/Views": "${workspaceFolder}/Views"
    }
}

    Launch the debug configuration

    We can start the process with the .NET Core Launch (web) with Kubernetes launch configuration in the Debug pane in Visual Studio Code.

    Launch the `.NET Core Launch (web) with Kubernetes` from the Debug pane in Visual Studio Code

    Enable the Endpoint Manager

    Part of this process includes a local service to help manage the traffic routing and your hosts file. This will require admin or sudo privileges. On Windows, you'll get a prompt like:

    Prompt to launch the endpoint manager.

    Access your Kubernetes cluster "locally"

    Bridge to Kubernetes will set up a tunnel (thanks to port forwarding) to your local workstation and create local endpoints for the other Kubernetes hosted services in your cluster, as well as pretending to be a pod in that cluster (for the application you are debugging).

    Output from Bridge To Kubernetes setup task.

    After making the connection to your Kubernetes cluster, the launch configuration will continue. In this case, we'll make a debug build of the application and attach the debugger. (This process may cause the terminal in VS Code to scroll with build output. You can find the Bridge to Kubernetes output with the local IP addresses and ports in the Output pane for Bridge to Kubernetes.)

You can set breakpoints, use your debug console, set watches, and run tests against your local version of the service.

    Exploring the Running Application Environment

    One of the cool things that Bridge to Kubernetes does for our debugging experience is bring the environment configuration that our deployed pod would inherit. When we launch our app, it'll see configuration and secrets that we'd expect our pod to be running with.

To test this, we'll set a breakpoint in our application's startup to see which SQL Server is being used. We'll set a breakpoint at src/Infrastructure/Dependencies.cs on line 32.

    Then, we will start debugging the application with Bridge to Kubernetes. When it hits the breakpoint, we'll open the Debug pane and type configuration.GetConnectionString("CatalogConnection").

    When we run locally (not with Bridge to Kubernetes), we'd see:

    configuration.GetConnectionString("CatalogConnection")
    "Server=(localdb)\\mssqllocaldb;Integrated Security=true;Initial Catalog=Microsoft.eShopOnWeb.CatalogDb;"

But, with Bridge to Kubernetes, we see something more like (yours will vary based on the password):

    configuration.GetConnectionString("CatalogConnection")
    "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=*****************;TrustServerCertificate=True;"

    Debugging our local application connected to Kubernetes.

    We can see that the database server configured is based on our db service and the password is pulled from our secret in Azure KeyVault (via AKS).

    This helps us run our local application just like it was actually in our cluster.
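If you want to sanity-check this against what the deployed pod itself reports, one quick option (a sketch - deployment/web is an assumption about the name of the deployment behind the web service) is to dump its environment:

# Print the environment of a pod from the (assumed) web deployment for comparison
kubectl exec deployment/web -- printenv | sort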

    Going Further

    Bridge to Kubernetes also supports more advanced scenarios and, as you need to start routing traffic around inside your cluster, may require you to modify your application to pass along a kubernetes-route-as header to help ensure that traffic for your debugging workloads is properly handled. The docs go into much greater detail about that.

    Instrumentation

    Now that we've figured out our debugging story, we'll need to ensure that we have the right context clues to find where we need to debug or to give us a better idea of how well our microservices are running.

    Logging and Tracing

    Logging and tracing become even more critical in Kubernetes, where your application could be running in a number of pods across different nodes. When you have an issue, in addition to the normal application data, you'll want to know what pod and what node had the issue, what the state of those resources were (were you resource constrained or were shared resources unavailable?), and if autoscaling is enabled, you'll want to know if a scale event has been triggered. There are a multitude of other concerns based on your application and the environment you maintain.

Given these informational needs, it's crucial to revisit your existing logging and instrumentation. Most frameworks and languages have extensible logging, tracing, and instrumentation libraries that let you iteratively add information, such as pod and node states, and ensure that requests can be traced across your microservices. This will pay you back time and time again when you have to troubleshoot issues in your environment.
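To make the distributed nature of this concrete: even an ad-hoc look at logs now spans multiple pods and containers, so you quickly end up reaching for label selectors and prefixes (a sketch - the app=web label is an assumption about how the sample app's pods are labeled):

# Tail the last hour of logs from every pod (and container) matching the label
kubectl logs -l app=web --all-containers --prefix --since=1h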

    Centralized Logging

    To enhance the troubleshooting process further, it's important to implement centralized logging to consolidate logs from all your microservices into a single location. This makes it easier to search and analyze logs when you're troubleshooting an issue.

    Automated Alerting

    Additionally, implementing automated alerting, such as sending notifications when specific conditions occur in the logs, can help you detect issues before they escalate.

    End to end Visibility

    End-to-end visibility is also essential in understanding the flow of requests and responses between microservices in a distributed system. With end-to-end visibility, you can quickly identify bottlenecks and slowdowns in the system, helping you to resolve issues more efficiently.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/page/16/index.html b/cnny-2023/page/16/index.html index a88fca2169..3211614337 100644 --- a/cnny-2023/page/16/index.html +++ b/cnny-2023/page/16/index.html @@ -14,13 +14,13 @@ - +

    · 6 min read
    Josh Duffney

    Welcome to Day 5 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about debugging and instrumenting our application. Today we'll explore the topic of container image signing and secure supply chain.

Ask the Experts: Thursday, February 9th at 9 AM PST
Live on YouTube: Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Introduction
    • Prerequisites
    • Create a digital signing certificate
    • Generate an Azure Container Registry Token
    • Set up Notation
    • Install the Notation Azure Key Vault Plugin
    • Add the signing Certificate to Notation
    • Sign Container Images
    • Conclusion

    Introduction

    The secure supply chain is a crucial aspect of software development, delivery, and deployment, and digital signing plays a critical role in this process.

By using digital signatures to verify the authenticity and integrity of container images, organizations can improve the security of their software supply chain and reduce the risk of security breaches and data compromise.

In this article, you'll learn how to use Notary, an open-source project hosted by the Cloud Native Computing Foundation (CNCF), to digitally sign container images stored in Azure Container Registry.

    Prerequisites

To follow along, you'll need an instance of Azure Container Registry and an instance of Azure Key Vault.

    Create a digital signing certificate

A digital signing certificate is a certificate used to digitally sign and verify the authenticity and integrity of digital artifacts - documents, software, and of course container images.

    Before you can implement digital signatures, you must first create a digital signing certificate.

Run the following commands to generate the certificate:

    1. Create the policy file

cat <<EOF > ./my_policy.json
{
  "issuerParameters": {
    "certificateTransparency": null,
    "name": "Self"
  },
  "x509CertificateProperties": {
    "ekus": [
      "1.3.6.1.5.5.7.3.3"
    ],
    "key_usage": [
      "digitalSignature"
    ],
    "subject": "CN=${keySubjectName}",
    "validityInMonths": 12
  }
}
EOF

      The ekus and key usage of this certificate policy dictate that the certificate can only be used for digital signatures.

    2. Create the certificate in Azure Key Vault

      az keyvault certificate create --name $keyName --vault-name $keyVaultName --policy @my_policy.json

      Replace $keyName and $keyVaultName with your desired certificate name and Azure Key Vault instance name.

Generate an Azure Container Registry token

    Azure Container Registry tokens are used to grant access to the contents of the registry. Tokens can be used for a variety of things such as pulling images, pushing images, or managing the registry.

    As part of the container image signing workflow, you'll need a token to authenticate the Notation CLI with your Azure Container Registry.

    Run the following command to generate an ACR token:

    az acr token create \
    --name $tokenName \
    --registry $registry \
    --scope-map _repositories_admin \
    --query 'credentials.passwords[0].value' \
    --only-show-errors \
    --output tsv

    Replace $tokenName with your name for the ACR token and $registry with the name of your ACR instance.
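The command prints the generated password to stdout. Since the notation sign commands later in this post reference a $tokenPassword variable, one way to capture it (a sketch reusing the exact command above) is:

tokenPassword=$(az acr token create \
  --name $tokenName \
  --registry $registry \
  --scope-map _repositories_admin \
  --query 'credentials.passwords[0].value' \
  --only-show-errors \
  --output tsv)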

Set up Notation

    Notation is the command-line interface for the CNCF Notary project. You'll use it to digitally sign the api and web container images for the eShopOnWeb application.

Run the following commands to download and install the Notation CLI:

    1. Open a terminal or command prompt window

Download the Notation release

      curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      If you're not using Linux, you can find the releases here.

    3. Extract the contents of the notation.tar.gz

      tar xvzf notation.tar.gz > /dev/null 2>&1
    4. Copy the notation binary to the $HOME/bin directory

      cp ./notation $HOME/bin
    5. Add the $HOME/bin directory to the PATH environment variable

      export PATH="$HOME/bin:$PATH"
    6. Remove the downloaded files

      rm notation.tar.gz LICENSE
    7. Check the notation version

      notation --version

    Install the Notation Azure Key Vault plugin

By design, the Notation CLI supports plugins that extend its digital signing capabilities to remote registries. To sign container images stored in Azure Container Registry, you'll need to install the Azure Key Vault plugin for Notation.

    Run the following commands to install the azure-kv plugin:

    1. Download the plugin

      curl -Lo notation-azure-kv.tar.gz \
      https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      Non-Linux releases can be found here.

    2. Extract to the plugin directory & delete download files

tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv > /dev/null 2>&1

      rm -rf notation-azure-kv.tar.gz
    3. Verify the plugin was installed

      notation plugin ls

    Add the signing certificate to Notation

    Now that you have Notation and the Azure Key Vault plugin installed, add the certificate's keyId created above to Notation.

    1. Get the Certificate Key ID from Azure Key Vault

      az keyvault certificate show \
      --vault-name $keyVaultName \
      --name $keyName \
      --query "kid" --only-show-errors --output tsv

Replace $keyVaultName and $keyName with the appropriate information. (A sketch for capturing this value into the $keyID variable used in step 2 follows this list.)

    2. Add the Key ID to KMS using Notation

      notation key add --plugin azure-kv --id $keyID $keyName
    3. Check the key list

      notation key ls
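Putting steps 1 and 2 together, a minimal sketch that captures the key ID into the $keyID variable looks like this:

keyID=$(az keyvault certificate show \
  --vault-name $keyVaultName \
  --name $keyName \
  --query "kid" --only-show-errors --output tsv)
notation key add --plugin azure-kv --id $keyID $keyName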

    Sign Container Images

    At this point, all that's left is to sign the container images.

    Run the notation sign command to sign the api and web container images:

    notation sign $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword

    notation sign $registry.azurecr.io/api:$tag \
    --username $tokenName \
    --password $tokenPassword

Replace $registry, $tag, $tokenName, and $tokenPassword with the appropriate values. To improve security, reference the image by its SHA digest rather than a mutable tag.

    NOTE: If you didn't take note of the token password, you can rerun the az acr token create command to generate a new password.

    Conclusion

    Digital signing plays a critical role in ensuring the security of software supply chains.

    By signing software components, organizations can verify the authenticity and integrity of software, helping to prevent unauthorized modifications, tampering, and malware.

    And if you want to take digital signing to a whole new level by using them to prevent the deployment of unsigned container images, check out the Ratify project on GitHub!

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/page/17/index.html b/cnny-2023/page/17/index.html index 5928fce10b..03a1b000f2 100644 --- a/cnny-2023/page/17/index.html +++ b/cnny-2023/page/17/index.html @@ -14,13 +14,13 @@ - +

    · 7 min read
    Nitya Narasimhan

    Welcome to Week 4 of #CloudNativeNewYear!

    This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. We'll start with an exploration of Serverless Container Options - ranging from managed services to Azure Kubernetes Service (AKS) and Azure Container Apps (ACA), to options that allow more granular control!

    What We'll Cover

    • The Azure Compute Landscape
    • Serverless Compute on Azure
    • Comparing Container Options On Azure
    • Other Considerations
    • Exercise: Try this yourself!
    • Resources: For self-study!


    We started this series with an introduction to core concepts:

    • In Containers 101, we learned why containerization matters. Think portability, isolation, scalability, resource-efficiency and cost-effectiveness. But not all apps can be containerized.
    • In Kubernetes 101, we learned how orchestration works. Think systems to automate container deployment, scaling, and management. But using Kubernetes directly can be complex.
    • In Exploring Cloud Native Options we asked the real questions: can we containerize - and should we?. The first depends on app characteristics, the second on your requirements.

    For example:

    • Can we containerize? The answer might be no if your app has system or OS dependencies, requires access to low-level hardware, or maintains complex state across sessions.
• Should we containerize? The answer might be yes if your app is microservices-based, is stateless by default, requires portability, or is a legacy app that can benefit from container isolation.

    As with every technology adoption decision process, there are no clear yes/no answers - just tradeoffs that you need to evaluate based on your architecture and application requirements. In today's post, we try to look at this from two main perspectives:

    1. Should you go serverless? Think: managed services that let you focus on app, not infra.
    2. What Azure Compute should I use? Think: best fit for my architecture & technology choices.

    Azure Compute Landscape

Let's answer the second question first by exploring all available compute options on Azure. The illustrated decision-flow below is my favorite way to navigate the choices, with questions like:

    • Are you migrating an existing app or building a new one?
• Can your app be containerized?
    • Does it use a specific technology (Spring Boot, Red Hat Openshift)?
    • Do you need access to the Kubernetes API?
    • What characterizes the workload? (event-driven, web app, microservices etc.)

Read the docs to understand how your choices can be influenced by the hosting model (IaaS, PaaS, FaaS), supported features (Networking, DevOps, Scalability, Availability, Security), architectural styles (Microservices, Event-driven, High-Performance Compute, Task Automation, Web-Queue Worker) etc.

    Compute Choices

Now that we know all the available compute options, let's address the first question: why go serverless? And what are my serverless compute options on Azure?

    Azure Serverless Compute

Serverless gets defined many ways, but from a compute perspective we can focus on a few characteristics that are key to this decision:

    • managed services - focus on application, let cloud provider handle infrastructure.
    • pay for what you use - get cost-effective resource utilization, flexible pricing options.
    • autoscaling on demand - take advantage of built-in features like KEDA-compliant triggers.
    • use preferred languages - write code in Java, JS, C#, Python etc. (specifics based on service)
    • cloud-native architectures - can support event-driven solutions, APIs, Microservices, DevOps!

So what are some of the key options for Serverless Compute on Azure? Azure's serverless portfolio spans fully-managed, end-to-end solutions with comprehensive support for DevOps, DevTools, AI/ML, Database, Storage, Monitoring, and Analytics integrations - but here we'll focus on the four categories of application compute.

    1. Serverless Containerized Microservices using Azure Container Apps. Code in your preferred language, exploit full Dapr support, scale easily with any KEDA-compliant trigger.
    2. Serverless Application Environments using Azure App Service. Suitable for hosting monolithic apps (vs. microservices) in a managed service, with built-in support for on-demand scaling.
    3. Serverless Kubernetes using Azure Kubernetes Service (AKS). Spin up pods inside container instances and deploy Kubernetes-based applications with built-in KEDA-compliant autoscaling.
    4. Serverless Functions using Azure Functions. Execute "code at the granularity of functions" in your preferred language, and scale on demand with event-driven compute.

We'll talk about these, and other compute comparisons, at the end of the article. But let's start with the core option you might choose if you want a managed serverless compute solution with built-in features for delivering containerized microservices at scale. Hello, Azure Container Apps!

    Azure Container Apps

    Azure Container Apps (ACA) became generally available in May 2022 - providing customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform. The figure below showcases the different types of applications that can be built with ACA. Note that it comes with built-in KEDA-compliant autoscaling triggers, and other auto-scale criteria that may be better-suited to the type of application you are building.

    About ACA

    So far in the series, we've put the spotlight on Azure Kubernetes Service (AKS) - so you're probably asking yourself: How does ACA compare to AKS?. We're glad you asked. Check out our Go Cloud-native with Azure Container Apps post from the #ServerlessSeptember series last year for a deeper-dive, or review the figure below for the main comparison points.

    The key takeaway is this. Azure Container Apps (ACA) also runs on Kubernetes but abstracts away its complexity in a managed service offering that lets you get productive quickly without requiring detailed knowledge of Kubernetes workings or APIs. However, if you want full access and control over the Kubernetes API then go with Azure Kubernetes Service (AKS) instead.

    Comparison
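If you want to kick the tires on ACA right away, here's a minimal sketch (the app and resource group names are placeholders, the image is the public Container Apps hello-world sample, and the containerapp Azure CLI extension needs to be installed):

az containerapp up \
  --name my-container-app \
  --resource-group my-rg \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --ingress external \
  --target-port 80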

    Other Container Options

Azure Container Apps is the preferred Platform as a Service (PaaS) option for a fully-managed serverless solution on Azure that is purpose-built for cloud-native, microservices-based application workloads. But there are other options that may be suitable for your specific needs, from a requirements and tradeoffs perspective. Let's review them quickly:

    1. Azure Functions is the serverless Functions-as-a-Service (FaaS) option, as opposed to ACA which supports a PaaS approach. It's optimized for running event-driven applications built at the granularity of ephemeral functions that can be deployed as code or containers.
    2. Azure App Service provides fully managed hosting for web applications that may be deployed using code or containers. It can be integrated with other services including Azure Container Apps and Azure Functions. It's optimized for deploying traditional web apps.
    3. Azure Kubernetes Service provides a fully managed Kubernetes option capable of running any Kubernetes workload, with direct access to the Kubernetes API.
    4. Azure Container Instances provides a single pod of Hyper-V isolated containers on demand, making them more of a low-level "building block" option compared to ACA.

    Based on the technology choices you made for application development you may also have more specialized options you want to consider. For instance:

1. Azure Spring Apps is ideal if you're running Spring Boot or Spring Cloud workloads on Azure.
    2. Azure Red Hat OpenShift is ideal for integrated Kubernetes-powered OpenShift on Azure.
    3. Azure Confidential Computing is ideal if you have data/code integrity and confidentiality needs.
    4. Kubernetes At The Edge is ideal for bare-metal options that extend compute to edge devices.

This is just the tip of the iceberg in your decision-making journey - but hopefully it gave you a good sense of the options and the criteria that influence your final choices. Let's wrap this up with a look at self-study resources for skilling up further.

    Exercise

    Want to get hands on learning related to these technologies?

    TAKE THE CLOUD SKILLS CHALLENGE

    Register today and level up your skills by completing free learning modules, while competing with your peers for a place on the leaderboards!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/page/18/index.html b/cnny-2023/page/18/index.html index 6edf999821..39fa48bc40 100644 --- a/cnny-2023/page/18/index.html +++ b/cnny-2023/page/18/index.html @@ -14,13 +14,13 @@ - +

    · 3 min read
    Cory Skimming

    It's the final week of #CloudNativeNewYear! This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. In today's post, we will introduce you to the basics of the open-source project Draft and how it can be used to easily create and deploy applications to Kubernetes.

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    What We'll Cover

    • What is Draft?
    • Draft basics
    • Demo: Developing to AKS with Draft
    • Resources


    What is Draft?

    Draft is an open-source tool that can be used to streamline the development and deployment of applications on Kubernetes clusters. It provides a simple and easy-to-use workflow for creating and deploying applications, making it easier for developers to focus on writing code and building features, rather than worrying about the underlying infrastructure. This is great for users who are just getting started with Kubernetes, or those who are just looking to simplify their experience.

    New to Kubernetes?

    Draft basics

    Draft streamlines Kubernetes development by taking a non-containerized application and generating the Dockerfiles, K8s manifests, Helm charts, and other artifacts associated with a containerized application. Draft can also create a GitHub Action workflow file to quickly build and deploy your application onto any Kubernetes cluster.

1. 'draft create': Create a new Draft project by simply running the 'draft create' command. This command will walk you through a series of questions about your application specification (such as the application language) and create a Dockerfile, Helm chart, and Kubernetes manifests for your application.
2. 'draft generate-workflow': Automatically build out a GitHub Action workflow using the 'draft generate-workflow' command.
3. 'draft setup-gh': If you are using Azure, use this command to automate the GitHub OIDC setup process to ensure that you will be able to deploy your application using your GitHub Action.

    At this point, you will have all the files needed to deploy your app onto a Kubernetes cluster (we told you it was easy!).

    You can also use the 'draft info' command if you are looking for information on supported languages and deployment types. Let's see it in action, shall we?
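As a quick sketch of the end-to-end flow (run from the root of your application's repository, with Draft already installed):

draft create              # scaffold a Dockerfile, Helm chart, and Kubernetes manifests
draft setup-gh            # (Azure) configure GitHub OIDC for your GitHub Action
draft generate-workflow   # generate the GitHub Action workflow file
draft info                # list supported languages and deployment types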


    Developing to AKS with Draft

In this Microsoft Reactor session below, we'll briefly introduce Kubernetes and the Azure Kubernetes Service (AKS) and then demo how to enable your applications for Kubernetes using the open-source tool Draft. We'll show how Draft can help you create the boilerplate code to containerize your applications and add routing and scaling behaviours.

Conclusion

    Overall, Draft simplifies the process of building, deploying, and managing applications on Kubernetes, and can make the overall journey from code to Kubernetes significantly easier.


    Resources


    - + \ No newline at end of file diff --git a/cnny-2023/page/19/index.html b/cnny-2023/page/19/index.html index 8ac5012d75..5294fe41a1 100644 --- a/cnny-2023/page/19/index.html +++ b/cnny-2023/page/19/index.html @@ -14,14 +14,14 @@ - +

    · 7 min read
    Vinicius Apolinario

    Welcome to Day 3 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about using Draft to accelerate your Kubernetes adoption. Today we'll explore the topic of Windows containers.

    What We'll Cover

    • Introduction
    • Windows containers overview
    • Windows base container images
    • Isolation
    • Exercise: Try this yourself!
    • Resources: For self-study!

    Introduction

Windows containers were launched along with Windows Server 2016 and have evolved since. In the latest release, Windows Server 2022, Windows containers have reached a great level of maturity and allow customers to run production-grade workloads.

While suitable for new development, Windows containers also give developers and operations teams a different approach than Linux containers. They allow existing Windows applications to be containerized with little or no code changes, and they let professionals who are more comfortable with the Windows platform and OS leverage their skill set while taking advantage of the container platform.

    Windows container overview

In essence, Windows containers are very similar to Linux containers. Since Windows containers use the same foundation as Docker containers, the same architecture applies - with some Windows-specific notes. For example, when running a Windows container via Docker, you use the same commands, such as docker run. To pull a container image, you can use docker pull, just like on Linux. However, to run a Windows container, you also need a Windows container host. This requirement exists because, as you might remember, a container shares the OS kernel with its container host.

On Kubernetes, Windows containers have been supported since Windows Server 2019. Just like with Docker, you can manage Windows containers like any other resource in the Kubernetes ecosystem. A Windows node can be part of a Kubernetes cluster, allowing you to run Windows container-based applications on services like Azure Kubernetes Service. To deploy a Windows application to a Windows pod in Kubernetes, you author a YAML specification much like you would for Linux. The main differences are that you point to an image that runs on Windows, and you specify a node selector to indicate that the pod needs to run on a Windows node.

    Windows base container images

On Windows containers, you will always use a base container image provided by Microsoft. This base container image contains the OS binaries for the container to run. It can be as large as 3GB+ or as small as ~300MB. The difference in size is a consequence of the APIs and components available in each Windows base container image. There are primarily three images: Nano Server, Server Core, and Server.

Nano Server is the smallest image, at around 300MB. It's a base container image for new development and cloud-native scenarios. Applications need to target Nano Server as the Windows OS, so not all frameworks will work. For example, .NET works on Nano Server, but .NET Framework doesn't. Other third-party frameworks also work on Nano Server, such as Apache, NodeJS, Python, Tomcat, Java runtime, JBoss, and Redis, among others.

Server Core is a much larger base container image, at around 1.25GB. Its larger size is compensated by its application compatibility. Simply put, any application that meets the requirements to run in a Windows container can be containerized with this image.

    The Server image builds on the Server Core one. It ranges around 3.1GB and has even greater application compatibility than the Server Core image. In addition to the traditional Windows APIs and components, this image allows for scenarios such as Machine Learning via DirectX with GPU access.

    The best image for your scenario is dependent on the requirements your application has on the Windows OS inside a container. However, there are some scenarios that are not supported at all on Windows containers - such as GUI or RDP dependent applications, some Windows Server infrastructure roles, such as Active Directory, among others.

    Isolation

    When running containers, the kernel of the container host is shared with the containers running on it. While extremely convenient, this poses a potential risk for multi-tenant scenarios. If one container is compromised and has access to the host, it could potentially compromise other containers in the same system.

For enterprise customers running on-premises (or even in the cloud), this can be mitigated by using a VM as a container host and considering the VM itself a security boundary. However, if multiple workloads from different tenants need to share the same host, Windows containers offer another option: Hyper-V isolation. While the name Hyper-V is associated with VMs, its virtualization capabilities extend to other services, including containers. Hyper-V isolated containers run on a purpose-built, extremely small, highly performant VM. However, you manage a container running with Hyper-V isolation the same way you do a process-isolated one. In fact, the only notable difference is that you need to append the --isolation=hyperv flag to the docker run command.

    Exercise

    Here are a few examples of how to use Windows containers:

    Run Windows containers via Docker on your machine

    To pull a Windows base container image:

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022

    To run a basic IIS container:

# This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    Run the same IIS container with Hyper-V isolation

# This command will pull and start an IIS container with Hyper-V isolation. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 --isolation=hyperv mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    To run a Windows container interactively:

    docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 powershell

    Run Windows containers on Kubernetes

To prepare an AKS cluster for Windows containers (note: replace the values in the example below with the ones from your environment):

    echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
    az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --generate-ssh-keys \
    --windows-admin-username $WINDOWS_USERNAME \
    --vm-set-type VirtualMachineScaleSets \
    --network-plugin azure

    To add a Windows node pool for Windows containers:

    az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwin \
    --node-count 1

    Deploy a sample ASP.Net application to the AKS cluster above using the YAML file below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "kubernetes.io/os": windows
      containers:
      - name: sample
        image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
        resources:
          limits:
            cpu: 1
            memory: 800M
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: sample

    Save the file above and run the command below on your Kubernetes cluster:

kubectl apply -f <filename>

    Once deployed, you can access the application by getting the IP address of your service:

    kubectl get service
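The service defined above is named sample. While the load balancer is being provisioned the EXTERNAL-IP will show as pending, so you can watch until it's assigned (a small sketch):

# Watch the service until an external IP is assigned (Ctrl+C to stop)
kubectl get service sample --watch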

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/page/2/index.html b/cnny-2023/page/2/index.html index 5e28a471be..88082bc4d2 100644 --- a/cnny-2023/page/2/index.html +++ b/cnny-2023/page/2/index.html @@ -14,14 +14,14 @@ - +

    · 5 min read
    Cory Skimming

    Welcome to Week 1 of #CloudNativeNewYear!

    Cloud-native New Year

    You will often hear the term "cloud-native" when discussing modern application development, but even a quick online search will return a huge number of articles, tweets, and web pages with a variety of definitions. So, what does cloud-native actually mean? Also, what makes an application a cloud-native application versus a "regular" application?

    Today, we will address these questions and more as we kickstart our learning journey (and our new year!) with an introductory dive into the wonderful world of cloud-native.


    What We'll Cover

    • What is cloud-native?
    • What is a cloud-native application?
    • The benefits of cloud-native
    • The five pillars of cloud-native
    • Exercise: Take the Cloud Skills Challenge!

    1. What is cloud-native?

    The term "cloud-native" can seem pretty self-evident (yes, hello, native to the cloud?), and in a way, it is. While there are lots of definitions of cloud-native floating around, at it's core, cloud-native simply refers to a modern approach to building software that takes advantage of cloud services and environments. This includes using cloud-native technologies, such as containers, microservices, and serverless, and following best practices for deploying, scaling, and managing applications in a cloud environment.

    Official definition from the Cloud Native Computing Foundation:

    Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

    These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil. Source


    2. So, what exactly is a cloud-native application?

    Cloud-native applications are specifically designed to take advantage of the scalability, resiliency, and distributed nature of modern cloud infrastructure. But how does this differ from a "traditional" application?

Traditional applications have generally been built, tested, and deployed as a single, monolithic unit. The monolithic nature of this type of architecture creates close dependencies between components. This complexity and interweaving only increases as an application grows, and can make it difficult to evolve (not to mention troubleshoot) and challenging to operate over time.

In contrast, in cloud-native architectures the application components are decomposed into loosely coupled services, rather than built and deployed as one block of code. This decomposition into multiple self-contained services enables teams to manage complexity and improve the speed, agility, and scale of software delivery. Having many small parts enables teams to make targeted updates, deliver new features, and fix issues without causing broader service disruption.


    3. The benefits of cloud-native

    Cloud-native architectures can bring many benefits to an organization, including:

    1. Scalability: easily scale up or down based on demand, allowing organizations to adjust their resource usage and costs as needed.
    2. Flexibility: deploy and run on any cloud platform, and easily move between clouds and on-premises environments.
    3. High-availability: techniques such as redundancy, self-healing, and automatic failover help ensure that cloud-native applications are designed to be highly-available and fault tolerant.
    4. Reduced costs: take advantage of the pay-as-you-go model of cloud computing, reducing the need for expensive infrastructure investments.
    5. Improved security: tap in to cloud security features, such as encryption and identity management, to improve the security of the application.
    6. Increased agility: easily add new features or services to your applications to meet changing business needs and market demand.

    4. The pillars of cloud-native

    There are five areas that are generally cited as the core building blocks of cloud-native architecture:

    1. Microservices: Breaking down monolithic applications into smaller, independent, and loosely-coupled services that can be developed, deployed, and scaled independently.
    2. Containers: Packaging software in lightweight, portable, and self-sufficient containers that can run consistently across different environments.
    3. Automation: Using automation tools and DevOps processes to manage and operate the cloud-native infrastructure and applications, including deployment, scaling, monitoring, and self-healing.
    4. Service discovery: Using service discovery mechanisms, such as APIs & service meshes, to enable services to discover and communicate with each other.
    5. Observability: Collecting and analyzing data from the infrastructure and applications to understand and optimize the performance, behavior, and health of the system.

    These can (and should!) be used in combination to deliver cloud-native solutions that are highly scalable, flexible, and available.

    WHAT'S NEXT

    Stay tuned, as we will be diving deeper into these topics in the coming weeks:

    • Jan 24: Containers 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud-native Options

    Resources


    Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!


    - + \ No newline at end of file diff --git a/cnny-2023/page/20/index.html b/cnny-2023/page/20/index.html index 934c216324..e3ac9e8364 100644 --- a/cnny-2023/page/20/index.html +++ b/cnny-2023/page/20/index.html @@ -14,13 +14,13 @@ - +

    · 4 min read
    Jorge Arteiro

    Welcome to Day 4 of Week 4 of #CloudNativeNewYear!

The theme for this week is going further with Cloud Native. Yesterday we talked about Windows containers. Today we'll explore the add-ons and extensions available for Azure Kubernetes Service (AKS).

    What We'll Cover

    • Introduction
    • Add-ons
    • Extensions
    • Add-ons vs Extensions
    • Resources

    Introduction

    Azure Kubernetes Service (AKS) is a fully managed container orchestration service that makes it easy to deploy and manage containerized applications on Azure. AKS offers a number of features and capabilities, including the ability to extend its supported functionality through the use of add-ons and extensions.

    There are also integrations available from open-source projects and third parties, but they are not covered by the AKS support policy.

    Add-ons

Add-ons provide a supported way to extend AKS. Installation, configuration, and lifecycle are managed by AKS, following pre-determined update rules.

As an example, let's enable Container Insights with the monitoring add-on on an existing AKS cluster using the az aks enable-addons --addons CLI command:

    az aks enable-addons \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --addons monitoring

Or you can use az aks create --enable-addons when creating new clusters:

    az aks create \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --enable-addons monitoring
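To check which add-ons are already enabled on a cluster, one option (a sketch using the same placeholder names as above) is to query the cluster's addonProfiles:

az aks show \
  --name MyManagedCluster \
  --resource-group MyResourceGroup \
  --query addonProfiles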

    The current available add-ons are:

    1. http_application_routing - Configure ingress with automatic public DNS name creation. Only recommended for development.
    2. monitoring - Container Insights monitoring.
    3. virtual-node - CNCF virtual nodes open source project.
    4. azure-policy - Azure Policy for AKS.
    5. ingress-appgw - Application Gateway Ingress Controller (AGIC).
    6. open-service-mesh - CNCF Open Service Mesh project.
    7. azure-keyvault-secrets-provider - Azure Key Vault Secrets Provider for Secret Store CSI Driver.
    8. web_application_routing - Managed NGINX ingress Controller.
    9. keda - CNCF Event-driven autoscaling project.

    For more details, get the updated list of AKS Add-ons here

    Extensions

Cluster extensions use Helm charts and integrate with Azure Resource Manager (ARM) to provide installation and lifecycle management of capabilities on top of AKS.

Extensions can be auto-upgraded across minor versions, but this requires extra management and configuration. Using the scope parameter, an extension can be installed on the whole cluster or per namespace.

AKS extensions require an Azure CLI extension to be installed. To add or update this CLI extension, use the following commands:

    az extension add --name k8s-extension

    and to update an existing extension

    az extension update --name k8s-extension

    There are only 3 available extensions:

    1. Dapr - CNCF Dapr project.
    2. Azure ML - Integrate Azure Machine Learning with AKS to train, inference and manage ML models.
    3. Flux (GitOps) - CNCF Flux project integrated with AKS to enable cluster configuration and application deployment using GitOps.

    As an example, you can install Azure ML using the following command:

    az k8s-extension create \
    --name aml-compute --extension-type Microsoft.AzureML.Kubernetes \
    --scope cluster --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters \
    --configuration-settings enableInference=True allowInsecureConnections=True
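To verify what's installed on a cluster afterwards, you can list its extensions (a sketch using the same placeholders as above):

az k8s-extension list \
  --cluster-name <clusterName> \
  --resource-group <resourceGroupName> \
  --cluster-type managedClusters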

    For more details, get the updated list of AKS Extensions here

    Add-ons vs Extensions

AKS add-ons bring the advantage of being fully managed by AKS itself, while AKS extensions are more flexible and configurable but require an extra level of management.

    Add-ons are part of the AKS resource provider in the Azure API, and AKS Extensions are a separate resource provider on the Azure API.

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/page/21/index.html b/cnny-2023/page/21/index.html index cfb8f53d88..893087c339 100644 --- a/cnny-2023/page/21/index.html +++ b/cnny-2023/page/21/index.html @@ -14,13 +14,13 @@ - +

    · 6 min read
    Cory Skimming
    Steven Murawski
    Paul Yu
    Josh Duffney
    Nitya Narasimhan
    Vinicius Apolinario
    Jorge Arteiro
    Devanshi Joshi

    And that's a wrap on the inaugural #CloudNativeNewYear! Thank you for joining us to kick off the new year with this learning journey into cloud-native! In this final post of the 2023 celebration of all things cloud-native, we'll do two things:

    • Look Back - with a quick retrospective of what was covered.
    • Look Ahead - with resources and suggestions for how you can continue your skilling journey!

    We appreciate your time and attention and we hope you found this curated learning valuable. Feedback and suggestions are always welcome. From our entire team, we wish you good luck with the learning journey - now go build some apps and share your knowledge! 🎉


    What We'll Cover

    • Cloud-native fundamentals
    • Kubernetes fundamentals
    • Bringing your applications to Kubernetes
    • Go further with cloud-native
    • Resources to keep the celebration going!

    Week 1: Cloud-native Fundamentals

    In Week 1, we took a tour through the fundamentals of cloud-native technologies, including a walkthrough of the core concepts of containers, microservices, and Kubernetes.

    • Jan 23 - Cloud-native Fundamentals: The answers to life and all the universe - what is cloud-native? What makes an application cloud-native? What are the benefits? (yes, we all know it's 42, but hey, gotta start somewhere!)
    • Jan 24 - Containers 101: Containers are an essential component of cloud-native development. In this intro post, we cover how containers work and why they have become so popular.
    • Jan 25 - Kubernetes 101: Kuber-what-now? Learn the basics of Kubernetes and how it enables us to deploy and manage our applications effectively and consistently.
    A QUICKSTART GUIDE TO KUBERNETES CONCEPTS

    Missed it Live? Tune in to A Quickstart Guide to Kubernetes Concepts on demand, now!

    • Jan 26 - Microservices 101: What is a microservices architecture and how can we go about designing one?
    • Jan 27 - Exploring your Cloud Native Options: Cloud-native, while catchy, can be a very broad term. What technologies should you use? Learn some basic guidelines for when it is optimal to use different technologies for your project.

    Week 2: Kubernetes Fundamentals

    In Week 2, we took a deeper dive into the Fundamentals of Kubernetes. The posts and live demo from this week took us through how to build a simple application on Kubernetes, covering everything from deployment to networking and scaling. Note: for our samples and demo we have used Azure Kubernetes Service, but the principles apply to any Kubernetes!

    • Jan 30 - Pods and Deployments: how to use pods and deployments in Kubernetes.
    • Jan 31 - Services and Ingress: how to use services and ingress and a walk through the steps of making our containers accessible internally and externally!
• Feb 1 - ConfigMaps and Secrets: how to pass configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.
    • Feb 2 - Volumes, Mounts, and Claims: how to use persistent storage on Kubernetes (and ensure your data can survive container restarts!).
    • Feb 3 - Scaling Pods and Nodes: how to scale pods and nodes in our Kubernetes cluster.
    ASK THE EXPERTS: AZURE KUBERNETES SERVICE

    Missed it Live? Tune in to Ask the Expert with Azure Kubernetes Service on demand, now!


    Week 3: Bringing your applications to Kubernetes

    So, you have learned how to build an application on Kubernetes. What about your existing applications? In Week 3, we explored how to take an existing application and set it up to run in Kubernetes:

    • Feb 6 - CI/CD: learn how to get an existing application running in Kubernetes with a full pipeline in GitHub Actions.
    • Feb 7 - Adapting Storage, Secrets, and Configuration: how to evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes.
    • Feb 8 - Opening your Application with Ingress: how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.
    • Feb 9 - Debugging and Instrumentation: how to debug and instrument your application now that it is on Kubernetes.
• Feb 10 - CI/CD Secure Supply Chain: now that we have set up our application on Kubernetes, let's talk about container image signing and how to set up a secure supply chain.

    Week 4: Go Further with Cloud-Native

    This week we have gone further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner.

    And today, February 17th, with this one post to rule (er, collect) them all!


    Keep the Learning Going!

    Learning is great, so why stop here? We have a host of great resources and samples for you to continue your cloud-native journey with Azure below:


    - + \ No newline at end of file diff --git a/cnny-2023/page/3/index.html b/cnny-2023/page/3/index.html index e9f84e5cde..7e67b4df39 100644 --- a/cnny-2023/page/3/index.html +++ b/cnny-2023/page/3/index.html @@ -14,14 +14,14 @@ - +

    · 4 min read
    Steven Murawski
    Paul Yu
    Josh Duffney

    Welcome to Day 2 of Week 1 of #CloudNativeNewYear!

    Today, we'll focus on building an understanding of containers.

    What We'll Cover

    • Introduction
    • How do Containers Work?
    • Why are Containers Becoming so Popular?
    • Conclusion
    • Resources
    • Learning Path

    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    In the beginning, we deployed our applications onto physical servers. We only had a certain number of those servers, so often they hosted multiple applications. This led to some problems when those applications shared dependencies. Upgrading one application could break another application on the same server.

    Enter virtualization. Virtualization allowed us to run our applications in an isolated operating system instance. This removed much of the risk of updating shared dependencies. However, it increased our overhead since we had to run a full operating system for each application environment.

    To address the challenges created by virtualization, containerization was created to improve isolation without duplicating kernel level resources. Containers provide efficient and consistent deployment and runtime experiences for our applications and have become very popular as a way of packaging and distributing applications.

    How do Containers Work?

    Containers build on two capabilities in the Linux operating system, namespaces and cgroups. These constructs allow the operating system to provide isolation to a process or group of processes, keeping their access to filesystem resources separate and providing controls on resource utilization. This, combined with tooling to help package, deploy, and run container images has led to their popularity in today’s operating environment. This provides us our isolation without the overhead of additional operating system resources.

When a container host is deployed on an operating system, it schedules access to the OS (operating system) components. It does this by providing a logically isolated group that can contain the processes for a given application, called a namespace. The container host then manages/schedules access from the namespace to the host OS, and uses cgroups to allocate compute resources. Together, with the help of cgroups and namespaces, the container host can schedule multiple applications to access host OS resources.

    Overall, this gives the illusion of virtualizing the host OS, where each application gets its own OS. In actuality, all the applications are running on the same operating system and sharing the same kernel as the container host.
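To make cgroups a bit more concrete, here's a minimal sketch using Docker (the nginx image and container name are just examples); the resource flags below are enforced through cgroups on the host:

# Start a container capped at half a CPU and 256 MB of memory (cgroup-enforced limits)
docker run -d --name limited-nginx --cpus="0.5" --memory=256m nginx
# Inspect the limits Docker recorded for the container
docker inspect limited-nginx --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}'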

    Containers are popular in the software development industry because they provide several benefits over traditional virtualization methods. Some of these benefits include:

    • Portability: Containers make it easy to move an application from one environment to another without having to worry about compatibility issues or missing dependencies.
• Isolation: Containers provide a level of isolation between the application and the host system, which means that the application running in the container cannot directly access the host system's resources.
    • Scalability: Containers make it easy to scale an application up or down as needed, which is useful for applications that experience a lot of traffic or need to handle a lot of data.
    • Resource Efficiency: Containers are more resource-efficient than traditional virtualization methods because they don't require a full operating system to be running on each virtual machine.
    • Cost-Effective: Containers are more cost-effective than traditional virtualization methods because they don't require expensive hardware or licensing fees.

    Conclusion

    Containers are a powerful technology that allows developers to package and deploy applications in a portable and isolated environment. This technology is becoming increasingly popular in the world of software development and is being used by many companies and organizations to improve their application deployment and management processes. With the benefits of portability, isolation, scalability, resource efficiency, and cost-effectiveness, containers are definitely worth considering for your next application development project.

    Containerizing applications is a key step in modernizing them, and there are many other patterns that can be adopted to achieve cloud-native architectures, including using serverless platforms, Kubernetes, and implementing DevOps practices.

    Resources

    Learning Path

    - + \ No newline at end of file diff --git a/cnny-2023/page/4/index.html b/cnny-2023/page/4/index.html index 37648d5144..b73e53cf26 100644 --- a/cnny-2023/page/4/index.html +++ b/cnny-2023/page/4/index.html @@ -14,14 +14,14 @@ - +

    · 3 min read
    Steven Murawski

    Welcome to Day 3 of Week 1 of #CloudNativeNewYear!

    This week we'll focus on what Kubernetes is.

    What We'll Cover

    • Introduction
    • What is Kubernetes? (Video)
    • How does Kubernetes Work? (Video)
    • Conclusion


    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    Kubernetes is an open source container orchestration engine that can help with automated deployment, scaling, and management of our applications.

    Kubernetes takes physical (or virtual) resources and provides a consistent API over them, bringing a consistency to the management and runtime experience for our applications. Kubernetes provides us with a number of capabilities such as:

    • Container scheduling
    • Service discovery and load balancing
    • Storage orchestration
    • Automated rollouts and rollbacks
    • Automatic bin packing
    • Self-healing
    • Secret and configuration management

    We'll learn more about most of these topics as we progress through Cloud Native New Year.
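To get a feel for that consistent API, here's a minimal sketch of interacting with any Kubernetes cluster via kubectl (the deployment name and image are just examples):

kubectl get nodes                              # the compute resources Kubernetes is managing
kubectl create deployment hello --image=nginx  # schedule a containerized workload
kubectl scale deployment hello --replicas=3    # scale it out
kubectl get pods -o wide                       # see where the pods landed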

    What is Kubernetes?

    Let's hear from Brendan Burns, one of the founders of Kubernetes as to what Kubernetes actually is.

    How does Kubernetes Work?

    And Brendan shares a bit more with us about how Kubernetes works.

    Conclusion

    Kubernetes allows us to deploy and manage our applications effectively and consistently.

    By providing a consistent API across many of the concerns our applications have, like load balancing, networking, storage, and compute, Kubernetes improves both our ability to build and ship new software.

Applications have standard ways to declare the resources they need, and deployments, metrics, and logs are provided in a standardized fashion, allowing more efficient operations across our application environments.

And since Kubernetes is an open source platform, it can be found in just about every type of operating environment - cloud, virtual machines, physical hardware, shared data centers, even small devices like Raspberry Pis!

    Want to learn more? Join us for a webinar on Kubernetes Concepts (or catch the playback) on Thursday, January 26th at 1 PM PST and watch for the rest of this series right here!

    - + \ No newline at end of file diff --git a/cnny-2023/page/5/index.html b/cnny-2023/page/5/index.html index 0df64cbb78..276dbd016f 100644 --- a/cnny-2023/page/5/index.html +++ b/cnny-2023/page/5/index.html @@ -14,13 +14,13 @@ - +

    · 6 min read
    Josh Duffney

    Welcome to Day 4 of Week 1 of #CloudNativeNewYear!

This week we're focused on the fundamentals of cloud-native. Today we'll dive into microservices - what they are, how to design them, and the challenges they introduce.

    What We'll Cover

    • What is Microservice Architecture?
    • How do you design a Microservice?
    • What challenges do Microservices introduce?
    • Conclusion
    • Resources


    Microservices are a modern way of designing and building software that increases deployment velocity by decomposing an application into small autonomous services that can be deployed independently.

    By deploying loosely coupled microservices your applications can be developed, deployed, and scaled independently. Because each service is independent, it can be updated or replaced without having to worry about the impact on the rest of the application. This means that if a bug is found in one service, it can be fixed without having to redeploy the entire application. All of which gives an organization the ability to deliver value to their customers faster.

    In this article, we will explore the basics of microservices architecture, its benefits and challenges, and how it can help improve the development, deployment, and maintenance of software applications.

    What is Microservice Architecture?

    Before explaining what Microservice architecture is, it’s important to understand what problems microservices aim to address.

Traditional software development is centered around building monolithic applications. Monolithic applications are built as a single, large codebase. Because the code is tightly coupled, monolithic apps tend to suffer from the following:

    Too much Complexity: Monolithic applications can become complex and difficult to understand and maintain as they grow. This can make it hard to identify and fix bugs and add new features.

    Difficult to Scale: Monolithic applications can be difficult to scale as they often have a single point of failure, which can cause the whole application to crash if a service fails.

    Slow Deployment: Deploying a monolithic application can be risky and time-consuming, as a small change in one part of the codebase can affect the entire application.

    Microservice architecture (often called microservices) is an architecture style that addresses the challenges created by Monolithic applications. Microservices architecture is a way of designing and building software applications as a collection of small, independent services that communicate with each other through APIs. This allows for faster development and deployment cycles, as well as easier scaling and maintenance than is possible with a monolithic application.

    How do you design a Microservice?

    Building applications with Microservices architecture requires a different approach. Microservices architecture focuses on business capabilities rather than technical layers, such as data access or messaging. Doing so requires that you shift your focus away from the technical stack and model your applications based upon the various domains that exist within the business.

    Domain-driven design (DDD) is a way to design software by focusing on the business needs. You can use Domain-driven design as a framework that guides the development of well-designed microservices by building services that encapsulate knowledge in each domain and abstract that knowledge from clients.

In Domain-driven design you start by modeling the business domain and creating a domain model. A domain model is an abstract model of the business that distills and organizes a domain of knowledge and provides a common language for developers and domain experts. It's the resulting domain model that microservices are best suited to be built around, because it helps establish a well-defined boundary between external systems and other internal applications.

    In short, before you begin designing microservices, start by mapping the functions of the business and their connections to create a domain model for the microservice(s) to be built around.

    What challenges do Microservices introduce?

    Microservices solve a lot of problems and have several advantages, but the grass isn’t always greener on the other side.

    One of the key challenges of microservices is managing communication between services. Because services are independent, they need to communicate with each other through APIs. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear API design, with well-defined inputs and outputs for each service. It is also important to have a system for managing and monitoring communication between services, to ensure that everything is running smoothly.

    Another challenge of microservices is managing the deployment and scaling of services. Because each service is independent, it needs to be deployed and scaled separately from the rest of the application. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear and consistent deployment process, with well-defined steps for deploying and scaling each service. Furthermore, it is advisable to host them on a system with self-healing capabilities to reduce operational burden.

    It is also important to have a system for monitoring and managing the deployment and scaling of services, to ensure optimal performance.

Each of these challenges has created fertile ground for the tooling and processes that exist in the cloud-native ecosystem. Kubernetes, CI/CD, and other DevOps practices are part of the package when adopting a microservices architecture.

    Conclusion

    In summary, microservices architecture focuses on software applications as a collection of small, independent services that communicate with each other over well-defined APIs.

    The main advantages of microservices include:

    • increased flexibility and scalability per microservice,
    • efficient resource utilization (with help from a container orchestrator like Kubernetes),
    • and faster development cycles.

    Continue following along with this series to see how you can use Kubernetes to help adopt microservices patterns in your own environments!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/page/6/index.html b/cnny-2023/page/6/index.html index 8e4f8d01c9..5bb08f0c8c 100644 --- a/cnny-2023/page/6/index.html +++ b/cnny-2023/page/6/index.html @@ -14,14 +14,14 @@ - +

    · 6 min read
    Cory Skimming

    We are excited to be wrapping up our first week of #CloudNativeNewYear! This week, we have tried to set the stage by covering the fundamentals of cloud-native practices and technologies, including primers on containerization, microservices, and Kubernetes.

Don't forget to sign up for the Cloud Skills Challenge!

    Today, we will do a brief recap of some of these technologies and provide some basic guidelines for when it is optimal to use each.


    What We'll Cover

    • To Containerize or not to Containerize?
    • The power of Kubernetes
    • Where does Serverless fit?
    • Resources
    • What's coming next!


    Just joining us now? Check out these other Week 1 posts:

    To Containerize or not to Containerize?

    As mentioned in our Containers 101 post earlier this week, containers can provide several benefits over traditional virtualization methods, which has made them popular within the software development community. Containers provide a consistent and predictable runtime environment, which can help reduce the risk of compatibility issues and simplify the deployment process. Additionally, containers can improve resource efficiency by allowing multiple applications to run on the same host while isolating their dependencies.

    Some types of apps that are a particularly good fit for containerization include:

    1. Microservices: Containers are particularly well-suited for microservices-based applications, as they can be used to isolate and deploy individual components of the system. This allows for more flexibility and scalability in the deployment process.
    2. Stateless applications: Applications that do not maintain state across multiple sessions, such as web applications, are well-suited for containers. Containers can be easily scaled up or down as needed and replaced with new instances, without losing data.
    3. Portable applications: Applications that need to be deployed in different environments, such as on-premises, in the cloud, or on edge devices, can benefit from containerization. The consistent and portable runtime environment of containers can make it easier to move the application between different environments.
    4. Legacy applications: Applications that are built using older technologies or that have compatibility issues can be containerized to run in an isolated environment, without impacting other applications or the host system.
    5. Dev and testing environments: Containerization can be used to create isolated development and testing environments, which can be easily created and destroyed as needed.

    While there are many types of applications that can benefit from a containerized approach, it's worth noting that containerization is not always the best option, and it's important to weigh the benefits and trade-offs before deciding to containerize an application. Additionally, some types of applications may not be a good fit for containers including:

    • Apps that require full access to host resources: Containers are isolated from the host system, so if an application needs direct access to hardware resources such as GPUs or specialized devices, it might not work well in a containerized environment.
    • Apps that require low-level system access: If an application requires deep access to the underlying operating system, it may not be suitable for running in a container.
    • Applications that have specific OS dependencies: Apps that have specific dependencies on a certain version of an operating system or libraries may not be able to run in a container.
    • Stateful applications: Apps that maintain state across multiple sessions, such as databases, may not be well suited for containers. Containers are ephemeral by design, so the data stored inside a container may not persist between restarts.

The good news is that some of these limitations can be overcome with the use of orchestration platforms such as Kubernetes, and by carefully designing the architecture of the application.


    The power of Kubernetes

    Speaking of Kubernetes...

    Kubernetes is a powerful tool for managing and deploying containerized applications in production environments, particularly for applications that need to scale, handle large numbers of requests, or run in multi-cloud or hybrid environments.

    Kubernetes is well-suited for a wide variety of applications, but it is particularly well-suited for the following types of applications:

    1. Microservices-based applications: Kubernetes provides a powerful set of tools for managing and deploying microservices-based applications, making it easy to scale, update, and manage the individual components of the application.
    2. Stateful applications: Kubernetes provides support for stateful applications through the use of Persistent Volumes and StatefulSets, allowing for applications that need to maintain state across multiple instances.
    3. Large-scale, highly-available systems: Kubernetes provides built-in support for scaling, self-healing, and rolling updates, making it an ideal choice for large-scale, highly-available systems that need to handle large numbers of users and requests.
    4. Multi-cloud and hybrid environments: Kubernetes can be used to deploy and manage applications across multiple cloud providers and on-premises environments, making it a good choice for organizations that want to take advantage of the benefits of multiple cloud providers or that need to deploy applications in a hybrid environment.
    New to Kubernetes?

    Where does Serverless fit in?

    Serverless is a cloud computing model where the cloud provider (like Azure) is responsible for executing a piece of code by dynamically allocating the resources. With serverless, you only pay for the exact amount of compute time that you use, rather than paying for a fixed amount of resources. This can lead to significant cost savings, particularly for applications with variable or unpredictable workloads.

    Serverless is commonly used for building applications like web or mobile apps, IoT, data processing, and real-time streaming - apps where the workloads are variable and high scalability is required. It's important to note that serverless is not a replacement for all types of workloads - it's best suited for stateless, short-lived and small-scale workloads.

    For a detailed look into the world of Serverless and lots of great learning content, revisit #30DaysofServerless.


    Resources


    What's up next in #CloudNativeNewYear?

    Week 1 has been all about the fundamentals of cloud-native. Next week, the team will be diving in to application deployment with Azure Kubernetes Service. Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!


    - + \ No newline at end of file diff --git a/cnny-2023/page/7/index.html b/cnny-2023/page/7/index.html index 2ede88ef17..121c0c93bf 100644 --- a/cnny-2023/page/7/index.html +++ b/cnny-2023/page/7/index.html @@ -14,13 +14,13 @@ - +

    · 14 min read
    Steven Murawski

    Welcome to Day #1 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Last week we talked about Cloud Native architectures and the Cloud Native landscape. Today we'll explore the topic of Pods and Deployments in Kubernetes.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Setting Up A Kubernetes Environment in Azure
    • Running Containers in Kubernetes Pods
    • Making the Pods Resilient with Deployments
    • Exercise
    • Resources

    Setting Up A Kubernetes Environment in Azure

    For this week, we'll be working with a simple app - the Azure Voting App. My teammate Paul Yu ported the app to Rust and we tweaked it a bit to let us highlight some of the basic features of Kubernetes.

    You should be able to replicate this in just about any Kubernetes environment, but we'll use Azure Kubernetes Service (AKS) as our working environment for this week.

    To make it easier to get started, there's a Bicep template to deploy an AKS cluster, an Azure Container Registry (ACR) (to host our container image), and connect the two so that we can easily deploy our application.

    Step 0 - Prerequisites

    There are a few things you'll need if you want to work through this and the following examples this week.

    Required:

    • Git (and probably a GitHub account if you want to persist your work outside of your computer)
    • Azure CLI
    • An Azure subscription (if you want to follow along with the Azure steps)
    • Kubectl (the command line tool for managing Kubernetes)

    Helpful:

    • Visual Studio Code (or equivalent editor)

    Step 1 - Clone the application repository

    First, I forked the source repository to my account.

    $GitHubOrg = 'smurawski' # Replace this with your GitHub account name or org name
    git clone "https://github.com/$GitHubOrg/azure-voting-app-rust"
    cd azure-voting-app-rust

    Leave your shell opened with your current location inside the application repository.

    Step 2 - Set up AKS

    Running the template deployment from the demo script (I'm using the PowerShell example in cnny23-week2-day1.ps1, but there's a Bash variant at cnny23-week2-day1.sh) stands up the environment. The second, third, and fourth commands take some of the output from the Bicep deployment to set up for later commands, so don't close out your shell after you run these commands.

    az deployment sub create --template-file ./deploy/main.bicep --location eastus --parameters 'resourceGroup=cnny-week2'
    $AcrName = az deployment sub show --name main --query 'properties.outputs.acr_name.value' -o tsv
    $AksName = az deployment sub show --name main --query 'properties.outputs.aks_name.value' -o tsv
    $ResourceGroup = az deployment sub show --name main --query 'properties.outputs.resource_group_name.value' -o tsv

    az aks get-credentials --resource-group $ResourceGroup --name $AksName

    Step 3 - Build our application container

    Since we have an Azure Container Registry set up, I'll use ACR Build Tasks to build and store my container image.

    az acr build --registry $AcrName --% --image cnny2023/azure-voting-app-rust:{{.Run.ID}} .
    $BuildTag = az acr repository show-tags `
    --name $AcrName `
    --repository cnny2023/azure-voting-app-rust `
    --orderby time_desc `
    --query '[0]' -o tsv
    tip

Wondering what the --% is in the first command line? That tells the PowerShell interpreter to pass the rest of the input "as is" to the command without parsing or evaluating it. Otherwise, PowerShell mangles the templated {{.Run.ID}} portion.

    Running Containers in Kubernetes Pods

    Now that we have our AKS cluster and application image ready to go, let's look into how Kubernetes runs containers.

If you've been in tech for any length of time, you've seen that every framework, runtime, orchestrator, etc. has its own naming scheme for its concepts. So let's get into some of what Kubernetes calls things.

    The Pod

A container running in Kubernetes runs inside a Pod. A Pod is basically a running container (or set of containers) on a Node or VM. It can be more - for example, you can run multiple containers in one Pod and specify some funky configuration - but we'll keep it simple for now and add the complexity when you need it.

    Our Pod definition can be created via the kubectl command imperatively from arguments or declaratively from a configuration file. We'll do a little of both. We'll use the kubectl command to help us write our configuration files. Kubernetes configuration files are YAML, so having an editor that supports and can help you syntax check YAML is really helpful.

    Creating a Pod Definition

    Let's create a few Pod definitions. Our application requires two containers to get working - the application and a database.

    Let's create the database Pod first. And before you comment, the configuration isn't secure nor best practice. We'll fix that later this week. For now, let's focus on getting up and running.

    This is a trick I learned from one of my teammates - Paul. By using the --output yaml and --dry-run=client options, we can have the command help us write our YAML. And with a bit of output redirection, we can stash it safely in a file for later use.

    kubectl run azure-voting-db `
    --image "postgres:15.0-alpine" `
    --env "POSTGRES_PASSWORD=mypassword" `
    --output yaml `
    --dry-run=client > manifests/pod-db.yaml

    This creates a file that looks like:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: azure-voting-db
  name: azure-voting-db
spec:
  containers:
  - env:
    - name: POSTGRES_PASSWORD
      value: mypassword
    image: postgres:15.0-alpine
    name: azure-voting-db
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

    The file, when supplied to the Kubernetes API, will identify what kind of resource to create, the API version to use, and the details of the container (as well as an environment variable to be supplied).

    We'll get that container image started with the kubectl command. Because the details of what to create are in the file, we don't need to specify much else to the kubectl command but the path to the file.

    kubectl apply -f ./manifests/pod-db.yaml

I'm going to need the IP address of the Pod so that my application can connect to it, so let's use kubectl to get some information about our Pod. By default, kubectl get pod only displays certain information, but it retrieves a lot more. We can use JSONPath syntax to index into the response and get the information we want.

    tip

To see what you can get, I usually run the kubectl command with JSON output (-o json), find where the data I want lives, and then build my JSONPath query to get it.
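
For example, dumping the whole object first makes it easy to spot the field you're after (the pod name here is the one we just created):

kubectl get pod azure-voting-db -o json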

    $DB_IP = kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    Now, let's create our Pod definition for our application. We'll use the same technique as before.

    kubectl run azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --env "DATABASE_SERVER=$DB_IP" `
    --env "DATABASE_PASSWORD=mypassword`
    --output yaml `
    --dry-run=client > manifests/pod-app.yaml

    That command gets us a similar YAML file to the database container - you can see the full file here

    Let's get our application container running.

    kubectl apply -f ./manifests/pod-app.yaml

    Now that the Application is Running

    We can check the status of our Pods with:

    kubectl get pods

    And we should see something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app 1/1 Running 0 36s
    azure-voting-db 1/1 Running 0 84s

    Once our pod is running, we can check to make sure everything is working by letting kubectl proxy network connections to our Pod running the application. If we get the voting web page, we'll know the application found the database and we can start voting!

    kubectl port-forward pod/azure-voting-app 8080:8080

    Azure voting website in a browser with three buttons, one for Dogs, one for Cats, and one for Reset.  The counter is Dogs - 0 and Cats - 0.

    When you are done voting, you can stop the port forwarding by using Control-C to break the command.

    Clean Up

    Let's clean up after ourselves and see if we can't get Kubernetes to help us keep our application running. We can use the same configuration files to ensure that Kubernetes only removes what we want removed.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    Summary - Pods

    A Pod is the most basic unit of work inside Kubernetes. Once the Pod is deleted, it's gone. That leads us to our next topic (and final topic for today.)

    Making the Pods Resilient with Deployments

    We've seen how easy it is to deploy a Pod and get our containers running on Nodes in our Kubernetes cluster. But there's a problem with that. Let's illustrate it.

    Breaking Stuff

    Setting Back Up

    First, let's redeploy our application environment. We'll start with our application container.

    kubectl apply -f ./manifests/pod-db.yaml
    kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    The second command will report out the new IP Address for our database container. Let's open ./manifests/pod-app.yaml and update the container IP to our new one.

    - name: DATABASE_SERVER
      value: YOUR_NEW_IP_HERE

    Then we can deploy the application with the information it needs to find its database. We'll also list out our pods to see what is running.

    kubectl apply -f ./manifests/pod-app.yaml
    kubectl get pods

    Feel free to look back and use the port forwarding trick to make sure your app is running if you'd like.

    Knocking It Down

    The first thing we'll try to break is our application pod. Let's delete it.

    kubectl delete pod azure-voting-app

    Then, we'll check our pod's status:

    kubectl get pods

    Which should show something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db 1/1 Running 0 50s

    We should be able to recreate our application pod deployment with no problem, since it has the current database IP address and nothing else depends on it.

    kubectl apply -f ./manifests/pod-app.yaml

    Again, feel free to do some fun port forwarding and check your site is running.

    Uncomfortable Truths

    Here's where it gets a bit stickier, what if we delete the database container?

    If we delete our database container and recreate it, it'll likely have a new IP address, which would force us to update our application configuration. We'll look at some solutions for these problems in the next three posts this week.

    Because our database problem is a bit tricky, we'll primarily focus on making our application layer more resilient and prepare our database layer for those other techniques over the next few days.

    Let's clean back up and look into making things more resilient.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    The Deployment

One of the reasons you may want to use Kubernetes is its ability to orchestrate workloads. Part of that orchestration includes being able to ensure that certain workloads are running (regardless of what Node they might be on).

    We saw that we could delete our application pod and then restart it from the manifest with little problem. It just meant that we had to run a command to restart it. We can use the Deployment in Kubernetes to tell the orchestrator to ensure we have our application pod running.

    The Deployment also can encompass a lot of extra configuration - controlling how many containers of a particular type should be running, how upgrades of container images should proceed, and more.
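
As a rough sketch of what that extra configuration looks like in a manifest (illustrative only, not the exact file used in this walkthrough; the image reference is a placeholder):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-voting-app
spec:
  replicas: 3                  # keep three copies of the application pod running
  strategy:
    type: RollingUpdate        # replace pods gradually when the image changes
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: azure-voting-app
  template:
    metadata:
      labels:
        app: azure-voting-app
    spec:
      containers:
      - name: azure-voting-app
        image: <YOUR_ACR_NAME>.azurecr.io/cnny2023/azure-voting-app-rust:<TAG>   # placeholder image reference
        ports:
        - containerPort: 8080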

    Creating the Deployment

    First, we'll create a Deployment for our database. We'll use a technique similar to what we did for the Pod, with just a bit of difference.

    kubectl create deployment azure-voting-db `
    --image "postgres:15.0-alpine" `
    --port 5432 `
    --output yaml `
    --dry-run=client > manifests/deployment-db.yaml

    Unlike our Pod definition creation, we can't pass in environment variable configuration from the command line. We'll have to edit the YAML file to add that.

    So, let's open ./manifests/deployment-db.yaml in our editor and add the following in the spec/containers configuration.

        env:
        - name: POSTGRES_PASSWORD
          value: "mypassword"

    Your file should look like this deployment-db.yaml.

    Once we have our configuration file updated, we can deploy our database container image.

    kubectl apply -f ./manifests/deployment-db.yaml

    For our application, we'll use the same technique.

    kubectl create deployment azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --port 8080 `
    --output yaml `
    --dry-run=client > manifests/deployment-app.yaml

    Next, we'll need to add an environment variable to the generated configuration. We'll also need the new IP address for the database deployment.

Previously, we named the pod and were able to ask for the IP address with kubectl and a bit of JSONPath. Now, the deployment created the pod for us, so there's a bit of randomness in the naming. Check out:

    kubectl get pods

    Should return something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 7s

    We can either ask for the IP with the new pod name, or we can use a selector to find our desired pod.

    kubectl get pod --selector app=azure-voting-db -o jsonpath='{.items[0].status.podIP}'

    Now, we can update our application deployment configuration file with:

        env:
        - name: DATABASE_SERVER
          value: YOUR_NEW_IP_HERE
        - name: DATABASE_PASSWORD
          value: mypassword

    Your file should look like this deployment-app.yaml (but with IPs and image names matching your environment).

    After we save those changes, we can deploy our application.

    kubectl apply -f ./manifests/deployment-app.yaml

    Let's test the resilience of our app now. First, we'll delete the pod running our application, then we'll check to make sure Kubernetes restarted our application pod.

    kubectl get pods
    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app-56c9ccc89d-skv7x 1/1 Running 0 71s
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 12m
    kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
    kubectl get pods
    azure-voting-app-rust ❯  kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
    >> kubectl get pods
    pod "azure-voting-app-56c9ccc89d-skv7x" deleted
    NAME READY STATUS RESTARTS AGE
    azure-voting-app-56c9ccc89d-2b5mx 1/1 Running 0 2s
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 14m
    info

    Your Pods will likely have different identifiers at the end, so adjust your commands to match the names in your environment.

    As you can see, by the time the kubectl get pods command was run, Kubernetes had already spun up a new pod for the application container image. Thanks Kubernetes!

    Clean up

Since deleting the pods would just cause the Deployments to recreate them, we have to delete the Deployments themselves.

    kubectl delete -f ./manifests/deployment-app.yaml
    kubectl delete -f ./manifests/deployment-db.yaml

    Summary - Deployments

    Deployments allow us to create more durable configuration for the workloads we deploy into Kubernetes. As we dig deeper, we'll discover more capabilities the deployments offer. Check out the Resources below for more.

    Exercise

    If you want to try these steps, head over to the source repository, fork it, clone it locally, and give it a spin!

    You can check your manifests against the manifests in the week2/day1 branch of the source repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training

    - + \ No newline at end of file diff --git a/cnny-2023/page/8/index.html b/cnny-2023/page/8/index.html index b50fe5fce2..cae66d1136 100644 --- a/cnny-2023/page/8/index.html +++ b/cnny-2023/page/8/index.html @@ -14,13 +14,13 @@ - +

    · 11 min read
    Paul Yu

    Welcome to Day 2 of Week 2 of #CloudNativeNewYear!

    The theme for this week is #Kubernetes fundamentals. Yesterday we talked about how to deploy a containerized web app workload to Azure Kubernetes Service (AKS). Today we'll explore the topic of services and ingress and walk through the steps of making our containers accessible both internally as well as over the internet so that you can share it with the world 😊

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Exposing Pods via Service
    • Exposing Services via Ingress
    • Takeaways
    • Resources

    Exposing Pods via Service

There are a few ways to expose your pod in Kubernetes. One way is to take an imperative approach and use the kubectl expose command. This is probably the quickest way to achieve your goal, but it isn't the best way. A better way to expose your pod is to take a declarative approach: create a service manifest file and deploy it using the kubectl apply command.

    Don't worry if you are unsure of how to make this manifest, we'll use kubectl to help generate it.

    First, let's ensure we have the database deployed on our AKS cluster.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests/deployment-db.yaml

Next, let's deploy the application. If you are following along from yesterday's content, there isn't anything you need to change; however, if you are deploying the app from scratch, you'll need to modify the deployment-app.yaml manifest and update it with your image tag and database pod's IP address.

    kubectl apply -f ./manifests/deployment-app.yaml

    Now, let's expose the database using a service so that we can leverage Kubernetes' built-in service discovery to be able to reference it by name; not pod IP. Run the following command.

    kubectl expose deployment azure-voting-db \
    --port=5432 \
    --target-port=5432

With the database exposed using a service, we can update the app deployment manifest to use the service name instead of the pod IP. This way, if the pod ever gets assigned a new IP, we don't have to worry about updating the IP and redeploying our web application each time. Kubernetes has an internal service discovery mechanism in place that allows us to reference a service by its name.
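
Under the hood this is DNS-based service discovery. If you want to see it in action, you can resolve the service name from a throwaway pod (a quick check, not part of the original walkthrough; the busybox image tag is just an example):

kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup azure-voting-db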

    Let's make an update to the manifest. Replace the environment variable for DATABASE_SERVER with the following:

        - name: DATABASE_SERVER
          value: azure-voting-db

    Re-deploy the app with the updated configuration.

    kubectl apply -f ./manifests/deployment-app.yaml

    One service down, one to go. Run the following command to expose the web application.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080

Notice the --type argument has a value of LoadBalancer. This service type is implemented by the cloud-controller-manager, which is part of the Kubernetes control plane. When using a managed Kubernetes cluster such as Azure Kubernetes Service, a public Standard Load Balancer will be provisioned when the service type is set to LoadBalancer. The load balancer will also have a public IP assigned, which makes your deployment publicly available.

    Kubernetes supports four service types:

    • ClusterIP: this is the default and limits service access to internal traffic within the cluster
    • NodePort: this assigns a port mapping on the node's IP address and allows traffic from the virtual network (outside the cluster)
    • LoadBalancer: as mentioned above, this creates a cloud-based load balancer
    • ExternalName: this is used in special case scenarios where you want to map a service to an external DNS name

    📝 NOTE: When exposing a web application to the internet, allowing external users to connect to your Service directly is not the best approach. Instead, you should use an Ingress, which we'll cover in the next section.

    Now, let's confirm you can reach the web app from the internet. You can use the following command to print the URL to your terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Great! The kubectl expose command gets the job done, but as mentioned above, it is not the best method of exposing deployments. It is better to expose deployments declaratively using a service manifest, so let's delete the services and redeploy using manifests.

    kubectl delete service azure-voting-db azure-voting-app

    To use kubectl to generate our manifest file, we can use the same kubectl expose command that we ran earlier but this time, we'll include --output=yaml and --dry-run=client. This will instruct the command to output the manifest that would be sent to the kube-api server in YAML format to the terminal.

    Generate the manifest for the database service.

    kubectl expose deployment azure-voting-db \
    --type=ClusterIP \
    --port=5432 \
    --target-port=5432 \
    --output=yaml \
    --dry-run=client > ./manifests/service-db.yaml

    Generate the manifest for the application service.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080 \
    --output=yaml \
    --dry-run=client > ./manifests/service-app.yaml

    The command above redirected the YAML output to your manifests directory. Here is what the web application service looks like.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: azure-voting-app
  type: LoadBalancer
status:
  loadBalancer: {}

    💡 TIP: To view the schema of any api-resource in Kubernetes, you can use the kubectl explain command. In this case the kubectl explain service command will tell us exactly what each of these fields do.
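
For example, to see what the type field accepts (assuming kubectl is already pointed at your cluster):

kubectl explain service.spec.type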

    Re-deploy the services using the new service manifests.

    kubectl apply -f ./manifests/service-db.yaml -f ./manifests/service-app.yaml

    # You should see TYPE is set to LoadBalancer and the EXTERNAL-IP is set
    kubectl get service azure-voting-db azure-voting-app

Confirm that our application is accessible again. Run the following command to print the URL to the terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    That was easy, right? We just exposed both of our pods using Kubernetes services. The database only needs to be accessible from within the cluster so ClusterIP is perfect for that. For the web application, we specified the type to be LoadBalancer so that we can access the application over the public internet.

    But wait... remember that if you want to expose web applications over the public internet, a Service with a public IP is not the best way; the better approach is to use an Ingress resource.

    Exposing Services via Ingress

    If you read through the Kubernetes documentation on Ingress you will see a diagram that depicts the Ingress sitting in front of the Service resource with a routing rule between it. In order to use Ingress, you need to deploy an Ingress Controller and it can be configured with many routing rules to forward traffic to one or many backend services. So effectively, an Ingress is a load balancer for your Services.

    With that said, we no longer need a service type of LoadBalancer since the service does not need to be accessible from the internet. It only needs to be accessible from the Ingress Controller (internal to the cluster) so we can change the service type to ClusterIP.

Update your service-app.yaml file to look like this:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: azure-voting-app

    📝 NOTE: The default service type is ClusterIP so we can omit the type altogether.

    Re-apply the app service manifest.

    kubectl apply -f ./manifests/service-app.yaml

    # You should see TYPE set to ClusterIP and EXTERNAL-IP set to <none>
    kubectl get service azure-voting-app

    Next, we need to install an Ingress Controller. There are quite a few options, and the Kubernetes-maintained NGINX Ingress Controller is commonly deployed.

    You could install this manually by following these instructions, but if you do that you'll be responsible for maintaining and supporting the resource.

    I like to take advantage of free maintenance and support when I can get it, so I'll opt to use the Web Application Routing add-on for AKS.

    💡 TIP: Whenever you install an AKS add-on, it will be maintained and fully supported by Azure Support.

    Enable the web application routing add-on in our AKS cluster with the following command.

    az aks addon enable \
    --name <YOUR_AKS_NAME> \
--resource-group <YOUR_AKS_RESOURCE_GROUP> \
--addon web_application_routing

    ⚠️ WARNING: This command can take a few minutes to complete
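
Once the add-on finishes enabling, you can sanity-check that its ingress class is registered in the cluster; the class name should match the one we use below:

kubectl get ingressclass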

    Now, let's use the same approach we took in creating our service to create our Ingress resource. Run the following command to generate the Ingress manifest.

    kubectl create ingress azure-voting-app \
    --class=webapprouting.kubernetes.azure.com \
    --rule="/*=azure-voting-app:80" \
    --output yaml \
    --dry-run=client > ./manifests/ingress.yaml

    The --class=webapprouting.kubernetes.azure.com option activates the AKS web application routing add-on. This AKS add-on can also integrate with other Azure services such as Azure DNS and Azure Key Vault for TLS certificate management and this special class makes it all work.

    The --rule="/*=azure-voting-app:80" option looks confusing but we can use kubectl again to help us understand how to format the value for the option.

    kubectl create ingress --help

    In the output you will see the following:

    --rule=[]:
    Rule in format host/path=service:port[,tls=secretname]. Paths containing the leading character '*' are
    considered pathType=Prefix. tls argument is optional.

It expects a host and path separated by a forward slash, then an equals sign followed by the backend service name and port separated by a colon. We're not using a hostname for this demo, so we can omit it. For the path, an asterisk is used to specify a wildcard path prefix.

    So, the value of /*=azure-voting-app:80 creates a routing rule for all paths following the domain (or in our case since we don't have a hostname specified, the IP) to route traffic to our azure-voting-app backend service on port 80.
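
For comparison, if we did have a hostname and a TLS secret (neither is used in this demo; the hostname and secret name below are purely hypothetical), the same flag would look something like:

--rule="contoso.example.com/*=azure-voting-app:80,tls=contoso-tls"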

    📝 NOTE: Configuring the hostname and TLS is outside the scope of this demo but please visit this URL https://bit.ly/aks-webapp-routing for an in-depth hands-on lab centered around Web Application Routing on AKS.

    Your ingress.yaml file should look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: azure-voting-app
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - http:
      paths:
      - backend:
          service:
            name: azure-voting-app
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}

    Apply the app ingress manifest.

    kubectl apply -f ./manifests/ingress.yaml

    Validate the web application is available from the internet again. You can run the following command to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Takeaways

Exposing your applications both internally and externally can be easily achieved using Service and Ingress resources respectively. If your service is HTTP or HTTPS based and needs to be accessible from outside the cluster, use an Ingress with an internal Service (i.e., ClusterIP or NodePort); otherwise, use the Service resource. If your TCP-based Service needs to be publicly accessible, set the type to LoadBalancer to expose a public IP for it. To learn more about these resources, please visit the links listed below.

    Lastly, if you are unsure how to begin writing your service manifest, you can use kubectl and have it do most of the work for you 🥳

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/page/9/index.html b/cnny-2023/page/9/index.html index d11c07a5e9..0b3bc855f6 100644 --- a/cnny-2023/page/9/index.html +++ b/cnny-2023/page/9/index.html @@ -14,14 +14,14 @@ - +

    · 6 min read
    Josh Duffney

    Welcome to Day 3 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about Services and Ingress. Today we'll explore the topic of passing configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

• Decouple configurations with ConfigMaps and Secrets
    • Passing Environment Data with ConfigMaps and Secrets
    • Conclusion

Decouple configurations with ConfigMaps and Secrets

A ConfigMap is a Kubernetes object that decouples configuration data from pod definitions. Kubernetes Secrets are similar, but they are designed to decouple sensitive information.

Separating the configuration and secrets from your application promotes better organization and security of your Kubernetes environment. It also enables you to share the same configuration and different secrets across multiple pods and deployments, which can simplify scaling and management. Using ConfigMaps and Secrets in Kubernetes is a best practice that can help improve the scalability, security, and maintainability of your cluster.

    By the end of this tutorial, you'll have added a Kubernetes ConfigMap and Secret to the Azure Voting deployment.

    Passing Environment Data with ConfigMaps and Secrets

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Create the ConfigMap

ConfigMaps can be consumed in one of two ways: as environment variables or as mounted volumes.
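
As a point of reference, the volume form looks roughly like this inside a pod spec (a minimal sketch we won't use in this tutorial; the mount path and volume name are arbitrary):

spec:
  containers:
  - name: azure-voting-app
    image: <YOUR_IMAGE>            # placeholder
    volumeMounts:
    - name: app-config
      mountPath: /etc/config       # each ConfigMap key becomes a file under this path
  volumes:
  - name: app-config
    configMap:
      name: azure-voting-config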

For this tutorial you'll use a ConfigMap to create three environment variables inside the pod: DATABASE_SERVER, FIRST_VALUE, and SECOND_VALUE. DATABASE_SERVER provides part of the connection string to the Postgres database. FIRST_VALUE and SECOND_VALUE are configuration options that change which voting options the application presents to users.

    Follow the below steps to create a new ConfigMap:

    1. Create a YAML file named 'config-map.yaml'. In this file, specify the environment variables for the application.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: azure-voting-config
  data:
    DATABASE_SERVER: azure-voting-db
    FIRST_VALUE: "Go"
    SECOND_VALUE: "Rust"
    2. Create the config map in your Kubernetes cluster by running the following command:

      kubectl create -f config-map.yaml

    Create the Secret

    The deployment-db.yaml and deployment-app.yaml are Kubernetes manifests that deploy the Azure Voting App. Currently, those deployment manifests contain the environment variables POSTGRES_PASSWORD and DATABASE_PASSWORD with the value stored as plain text. Your task is to replace that environment variable with a Kubernetes Secret.

    Create a Secret running the following commands:

    1. Encode mypassword.

      echo -n "mypassword" | base64
2. Create a YAML file named secret.yaml. In this file, add POSTGRES_PASSWORD as the key and the encoded value returned above as the value in the data section.

  apiVersion: v1
  kind: Secret
  metadata:
    name: azure-voting-secret
  type: Opaque
  data:
    POSTGRES_PASSWORD: bXlwYXNzd29yZA==
    3. Create the Secret in your Kubernetes cluster by running the following command:

      kubectl create -f secret.yaml

⚠️ WARNING: base64 encoding is a simple and widely supported way to obscure plaintext data, but it is not secure, as it can easily be decoded. If you want to store sensitive data like passwords, use a more secure method, such as encrypting with a Key Management Service (KMS), before storing it in the Secret.
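
To see how thin that protection is, anyone who can read the Secret can reverse the encoding with a single command:

echo "bXlwYXNzd29yZA==" | base64 --decode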

    Modify the app deployment manifest

With the ConfigMap and Secret both created, the next step is to replace the environment variables provided in the application deployment manifest with the values stored in the ConfigMap and the Secret.

Complete the following steps to add the ConfigMap and Secret to the deployment manifest:

    1. Open the Kubernetes manifest file deployment-app.yaml.

2. In the containers section, add an envFrom section and update the env section.

  envFrom:
  - configMapRef:
      name: azure-voting-config
  env:
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: azure-voting-secret
        key: POSTGRES_PASSWORD

  Using envFrom exposes all the values within the ConfigMap as environment variables, so you don't have to list them individually.

    3. Save the changes to the deployment manifest file.

    4. Apply the changes to the deployment by running the following command:

      kubectl apply -f deployment-app.yaml

    Modify the database deployment manifest

Next, update the database deployment manifest and replace the plain-text environment variable with the Kubernetes Secret.

    1. Open the deployment-db.yaml.

    2. To add the secret to the deployment, replace the env section with the following code:

  env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: azure-voting-secret
        key: POSTGRES_PASSWORD
    3. Apply the updated manifest.

      kubectl apply -f deployment-db.yaml

    Verify the ConfigMap and output environment variables

Verify that the ConfigMap was added to your deployment by running the following command:

kubectl describe deployment azure-voting-app

    Browse the output until you find the envFrom section with the config map reference.

You can also verify that the environment variables from the ConfigMap are being passed to the container by running kubectl exec -it <pod-name> -- printenv. This command will show you all the environment variables passed to the pod, including the ones from the ConfigMap.
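
For example, you can resolve the pod name with a label selector first and then filter for the variables we set (the label and variable names come from the manifests used above):

POD_NAME=$(kubectl get pod --selector app=azure-voting-app -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD_NAME" -- printenv | grep -E 'DATABASE_SERVER|FIRST_VALUE|SECOND_VALUE'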

    By following these steps, you will have successfully added a config map to the Azure Voting App Kubernetes deployment, and the environment variables defined in the config map will be passed to the container running in the pod.

    Verify the Secret and describe the deployment

    Once the secret has been created you can verify it exists by running the following command:

    kubectl get secrets

    You can view additional information, such as labels, annotations, type, and the Data by running kubectl describe:

    kubectl describe secret azure-voting-secret

    By default, the describe command doesn't output the encoded value, but if you output the results as JSON or YAML you'll be able to see the secret's encoded value.

     kubectl get secret azure-voting-secret -o json
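
Or, to pull out and decode just the one value (the data key matches the Secret we created above):

kubectl get secret azure-voting-secret -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 --decode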

    Conclusion

    In conclusion, using ConfigMaps and Secrets in Kubernetes can help to improve the scalability, security, and maintainability of your cluster. By decoupling configuration data and sensitive information from pod definitions, you can promote better organization and security in your Kubernetes environment. Additionally, separating these elements allows for sharing the same configuration and different secrets across multiple pods and deployments, simplifying scaling and management.

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/serverless-containers/index.html b/cnny-2023/serverless-containers/index.html index 600edd440e..bd2626eb23 100644 --- a/cnny-2023/serverless-containers/index.html +++ b/cnny-2023/serverless-containers/index.html @@ -14,13 +14,13 @@ - +

    4-1. Serverless Container Options

    · 7 min read
    Nitya Narasimhan

    Welcome to Week 4 of #CloudNativeNewYear!

This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. We'll start with an exploration of Serverless Container Options - ranging from managed services like Azure Kubernetes Service (AKS) and Azure Container Apps (ACA) to options that allow more granular control!

    What We'll Cover

    • The Azure Compute Landscape
    • Serverless Compute on Azure
    • Comparing Container Options On Azure
    • Other Considerations
    • Exercise: Try this yourself!
    • Resources: For self-study!


    We started this series with an introduction to core concepts:

    • In Containers 101, we learned why containerization matters. Think portability, isolation, scalability, resource-efficiency and cost-effectiveness. But not all apps can be containerized.
    • In Kubernetes 101, we learned how orchestration works. Think systems to automate container deployment, scaling, and management. But using Kubernetes directly can be complex.
• In Exploring Cloud Native Options we asked the real questions: can we containerize - and should we? The first depends on app characteristics, the second on your requirements.

    For example:

    • Can we containerize? The answer might be no if your app has system or OS dependencies, requires access to low-level hardware, or maintains complex state across sessions.
• Should we containerize? The answer might be yes if your app is microservices-based, is stateless by default, requires portability, or is a legacy app that can benefit from container isolation.

    As with every technology adoption decision process, there are no clear yes/no answers - just tradeoffs that you need to evaluate based on your architecture and application requirements. In today's post, we try to look at this from two main perspectives:

    1. Should you go serverless? Think: managed services that let you focus on app, not infra.
    2. What Azure Compute should I use? Think: best fit for my architecture & technology choices.

    Azure Compute Landscape

Let's answer the second question first by exploring all the available compute options on Azure. The illustrated decision flow below is my favorite way to navigate the choices, with questions like:

    • Are you migrating an existing app or building a new one?
• Can your app be containerized?
    • Does it use a specific technology (Spring Boot, Red Hat Openshift)?
    • Do you need access to the Kubernetes API?
    • What characterizes the workload? (event-driven, web app, microservices etc.)

Read the docs to understand how your choices can be influenced by the hosting model (IaaS, PaaS, FaaS), supported features (Networking, DevOps, Scalability, Availability, Security), architectural styles (Microservices, Event-driven, High-Performance Compute, Task Automation, Web-Queue Worker), etc.

    Compute Choices

Now that we know the available compute options, let's come back to the first question: why go serverless, and what are my serverless compute options on Azure?

    Azure Serverless Compute

Serverless gets defined many ways, but from a compute perspective, we can focus on a few characteristics that are key to this decision:

    • managed services - focus on application, let cloud provider handle infrastructure.
    • pay for what you use - get cost-effective resource utilization, flexible pricing options.
    • autoscaling on demand - take advantage of built-in features like KEDA-compliant triggers.
    • use preferred languages - write code in Java, JS, C#, Python etc. (specifics based on service)
    • cloud-native architectures - can support event-driven solutions, APIs, Microservices, DevOps!

So what are some of the key options for Serverless Compute on Azure? The linked article dives into fully-managed, end-to-end serverless solutions with comprehensive support for DevOps, DevTools, AI/ML, Database, Storage, Monitoring and Analytics integrations - but here we'll focus on just the four categories of applications when we look at Compute!

    1. Serverless Containerized Microservices using Azure Container Apps. Code in your preferred language, exploit full Dapr support, scale easily with any KEDA-compliant trigger.
    2. Serverless Application Environments using Azure App Service. Suitable for hosting monolithic apps (vs. microservices) in a managed service, with built-in support for on-demand scaling.
    3. Serverless Kubernetes using Azure Kubernetes Service (AKS). Spin up pods inside container instances and deploy Kubernetes-based applications with built-in KEDA-compliant autoscaling.
    4. Serverless Functions using Azure Functions. Execute "code at the granularity of functions" in your preferred language, and scale on demand with event-driven compute.

We'll talk about these, and other compute comparisons, at the end of the article. But let's start with the core option you might choose if you want a managed serverless compute solution with built-in features for delivering containerized microservices at scale. Hello, Azure Container Apps!

    Azure Container Apps

    Azure Container Apps (ACA) became generally available in May 2022 - providing customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform. The figure below showcases the different types of applications that can be built with ACA. Note that it comes with built-in KEDA-compliant autoscaling triggers, and other auto-scale criteria that may be better-suited to the type of application you are building.

    About ACA

    So far in the series, we've put the spotlight on Azure Kubernetes Service (AKS) - so you're probably asking yourself: How does ACA compare to AKS? We're glad you asked. Check out our Go Cloud-native with Azure Container Apps post from the #ServerlessSeptember series last year for a deeper-dive, or review the figure below for the main comparison points.

    The key takeaway is this. Azure Container Apps (ACA) also runs on Kubernetes but abstracts away its complexity in a managed service offering that lets you get productive quickly without requiring detailed knowledge of Kubernetes workings or APIs. However, if you want full access and control over the Kubernetes API then go with Azure Kubernetes Service (AKS) instead.

    Comparison

    Other Container Options

    Azure Container Apps is the preferred Platform as a Service (PaaS) option for a fully-managed serverless solution on Azure that is purpose-built for cloud-native, microservices-based application workloads. But - there are other options that may be suitable for your specific needs, from a requirements and tradeoffs perspective. Let's review them quickly:

    1. Azure Functions is the serverless Functions-as-a-Service (FaaS) option, as opposed to ACA which supports a PaaS approach. It's optimized for running event-driven applications built at the granularity of ephemeral functions that can be deployed as code or containers.
    2. Azure App Service provides fully managed hosting for web applications that may be deployed using code or containers. It can be integrated with other services including Azure Container Apps and Azure Functions. It's optimized for deploying traditional web apps.
    3. Azure Kubernetes Service provides a fully managed Kubernetes option capable of running any Kubernetes workload, with direct access to the Kubernetes API.
    4. Azure Container Instances provides a single pod of Hyper-V isolated containers on demand, making them more of a low-level "building block" option compared to ACA.

    Based on the technology choices you made for application development, you may also have more specialized options you want to consider. For instance:

    1. Azure Spring Apps is ideal if you're running Spring Boot or Spring Cloud workloads on Azure.
    2. Azure Red Hat OpenShift is ideal for integrated Kubernetes-powered OpenShift on Azure.
    3. Azure Confidential Computing is ideal if you have data/code integrity and confidentiality needs.
    4. Kubernetes At The Edge is ideal for bare-metal options that extend compute to edge devices.

    This is just the tip of the iceberg in your decision-making journey - but hopefully, it gave you a good sense of the options and criteria that influence your final choices. Let's wrap this up with a look at self-study resources for skilling up further.

    Exercise

    Want to get hands on learning related to these technologies?

    TAKE THE CLOUD SKILLS CHALLENGE

    Register today and level up your skills by completing free learning modules, while competing with your peers for a place on the leaderboards!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/index.html b/cnny-2023/tags/30-daysofcloudnative/index.html index 6a5261ac34..90bed6f167 100644 --- a/cnny-2023/tags/30-daysofcloudnative/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "30daysofcloudnative"

    View All Tags

    · 4 min read
    Cory Skimming
    Devanshi Joshi
    Steven Murawski
    Nitya Narasimhan

    Welcome to the Kick-off Post for #30DaysOfCloudNative - one of the core initiatives within #CloudNativeNewYear! Over the next four weeks, join us as we take you from fundamentals to functional usage of Cloud-native technologies, one blog post at a time! Read on to learn a little bit about this initiative and what you can expect to learn from this journey!

    What We'll Cover


    Cloud-native New Year

    Welcome to Week 01 of 🥳 #CloudNativeNewYear ! Today, we kick off a full month of content and activities to skill you up on all things Cloud-native on Azure with content, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfCloudNative in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

    What are 3 things you can do today, to jumpstart your learning journey?


    #30DaysOfCloudNative

    #30DaysOfCloudNative is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Cloud-native On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative weekly-themed roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON CLOUD-NATIVE FUNDAMENTALS

    Here's a sneak peek at the week 1 schedule. We'll start with a broad review of cloud-native fundamentals and walk through the core concepts of microservices, containers and Kubernetes.

    • Jan 23: Learn Core Concepts for Cloud-native
    • Jan 24: Container 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud Native Options

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Cloud-native Fundamentals post on January 23rd!


    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/10/index.html b/cnny-2023/tags/30-daysofcloudnative/page/10/index.html index 7d154fee4b..27503da288 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/10/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/10/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    16 posts tagged with "30daysofcloudnative"

    View All Tags

    · 14 min read
    Steven Murawski

    Welcome to Day 1 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Last week we talked about Kubernetes Fundamentals. Today we'll explore getting an existing application running in Kubernetes with a full pipeline in GitHub Actions.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Our Application
    • Adding Some Infrastructure as Code
    • Building and Publishing a Container Image
    • Deploying to Kubernetes
    • Summary
    • Resources

    Our Application

    This week we'll be taking an existing application - something similar to a typical line of business application - and setting it up to run in Kubernetes. Over the course of the week, we'll address different concerns. Today we'll focus on updating our CI/CD process to handle standing up (or validating that we have) an Azure Kubernetes Service (AKS) environment, building and publishing container images for our web site and API server, and getting those services running in Kubernetes.

    The application we'll be starting with is eShopOnWeb. This application has a web site and API which are backed by a SQL Server instance. It's built in .NET 7, so it's cross-platform.

    info

    For the enterprising among you, you may notice that there are a number of different eShopOn* variants on GitHub, including eShopOnContainers. We aren't using that example as it's more of an end state than a starting place. Afterwards, feel free to check out that example to see what this solution could look like as a series of microservices.

    Adding Some Infrastructure as Code

    Just like last week, we need to stand up an AKS environment. This week, however, rather than running commands in our own shell, we'll set up GitHub Actions to do that for us.

    There is a LOT of plumbing in this section, but once it's set up, it'll make our lives a lot easier. This section ensures that we have an environment to deploy our application into, configured the way we want. We can easily extend this to accommodate multiple environments or add additional microservices with minimal new effort.

    Federated Identity

    Setting up a federated identity gives us a more secure and auditable way of accessing Azure from GitHub Actions. For more about setting up a federated identity, Microsoft Learn has the details on connecting GitHub Actions to Azure.

    Here, we'll just walk through the setup of the identity and configure GitHub to use that identity to deploy our AKS environment and interact with our Azure Container Registry.

    The examples will use PowerShell, but a Bash version of the setup commands is available in the week3/day1 branch.

    Prerequisites

    To follow along, you'll need:

    • a GitHub account
    • an Azure Subscription
    • the Azure CLI
    • and the Git CLI.

    You'll need to fork the source repository under your GitHub user or organization where you can manage secrets and GitHub Actions.

    It would be helpful to have the GitHub CLI, but it's not required.
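    If you do have the GitHub CLI, forking and cloning can be done in one step. This is just an illustrative sketch - the repository name is assumed from the upstream reference used later in this post:

    # Hypothetical example: fork the sample repo under your account and clone it locally
    gh repo fork Azure-Samples/eShopOnAks --clone
    cd eShopOnAks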

    Set Up Some Defaults

    You will need to update one or more of the variables (your user or organization, what branch you want to work off of, and possibly the Azure AD application name if there is a conflict).

    # Replace the gitHubOrganizationName value
    # with the user or organization you forked
    # the repository under.

    $githubOrganizationName = 'Azure-Samples'
    $githubRepositoryName = 'eShopOnAKS'
    $branchName = 'week3/day1'
    $applicationName = 'cnny-week3-day1'

    Create an Azure AD Application

    Next, we need to create an Azure AD application.

    # Create an Azure AD application
    $aksDeploymentApplication = New-AzADApplication -DisplayName $applicationName

    Set Up Federation for that Azure AD Application

    And configure that application to allow federated credential requests from our GitHub repository for a particular branch.

    # Create a federated identity credential for the application
    New-AzADAppFederatedCredential `
    -Name $applicationName `
    -ApplicationObjectId $aksDeploymentApplication.Id `
    -Issuer 'https://token.actions.githubusercontent.com' `
    -Audience 'api://AzureADTokenExchange' `
    -Subject "repo:$($githubOrganizationName)/$($githubRepositoryName):ref:refs/heads/$branchName"

    Create a Service Principal for the Azure AD Application

    Once the application has been created, we need a service principal tied to that application. The service principal can be granted rights in Azure.

    # Create a service principal for the application
    New-AzADServicePrincipal -AppId $($aksDeploymentApplication.AppId)

    Give that Service Principal Rights to Azure Resources

    Because our Bicep deployment exists at the subscription level and we are creating role assignments, we need to give it Owner rights. If we changed the scope of the deployment to just a resource group, we could apply more scoped permissions.

    $azureContext = Get-AzContext
    New-AzRoleAssignment `
    -ApplicationId $($aksDeploymentApplication.AppId) `
    -RoleDefinitionName Owner `
    -Scope $azureContext.Subscription.Id
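    As a rough sketch of that alternative (not part of this walkthrough), a resource-group-scoped assignment with the Azure CLI might look like the following - the role, resource group name, and IDs are placeholders you'd adjust:

    # Hypothetical narrower assignment, if the Bicep deployment were scoped to a resource group
    az role assignment create \
      --assignee <appId> \
      --role Contributor \
      --scope /subscriptions/<subscriptionId>/resourceGroups/cnny-week3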

    Add Secrets to GitHub Repository

    If you have the GitHub CLI, you can use that right from your shell to set the secrets needed.

    gh secret set AZURE_CLIENT_ID --body $aksDeploymentApplication.AppId
    gh secret set AZURE_TENANT_ID --body $azureContext.Tenant.Id
    gh secret set AZURE_SUBSCRIPTION_ID --body $azureContext.Subscription.Id

    Otherwise, you can create them through the web interface like I did in the Learn Live event below.

    info

    It may look like the whole video will play, but it'll stop after configuring the secrets in GitHub (after about 9 minutes).

    The video shows creating the Azure AD application, service principals, and configuring the federated identity in Azure AD and GitHub.

    Creating a Bicep Deployment

    Reusable Workflows

    We'll create our Bicep deployment in a reusable workflow. What are reusable workflows? The previous link has the documentation, or the video below has my colleague Brandon Martinez and me talking about them.

    This workflow is basically the same deployment we did in last week's series, just in GitHub Actions.

    Start by creating a file called deploy_aks.yml in the .github/workflows directory with the below contents.

    name: deploy

    on:
      workflow_call:
        inputs:
          resourceGroupName:
            required: true
            type: string
        secrets:
          AZURE_CLIENT_ID:
            required: true
          AZURE_TENANT_ID:
            required: true
          AZURE_SUBSCRIPTION_ID:
            required: true
        outputs:
          containerRegistryName:
            description: Container Registry Name
            value: ${{ jobs.deploy.outputs.containerRegistryName }}
          containerRegistryUrl:
            description: Container Registry Login Url
            value: ${{ jobs.deploy.outputs.containerRegistryUrl }}
          resourceGroupName:
            description: Resource Group Name
            value: ${{ jobs.deploy.outputs.resourceGroupName }}
          aksName:
            description: Azure Kubernetes Service Cluster Name
            value: ${{ jobs.deploy.outputs.aksName }}

    permissions:
      id-token: write
      contents: read

    jobs:
      validate:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: azure/login@v1
            name: Sign in to Azure
            with:
              client-id: ${{ secrets.AZURE_CLIENT_ID }}
              tenant-id: ${{ secrets.AZURE_TENANT_ID }}
              subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          - uses: azure/arm-deploy@v1
            name: Run preflight validation
            with:
              deploymentName: ${{ github.run_number }}
              scope: subscription
              region: eastus
              template: ./deploy/main.bicep
              parameters: >
                resourceGroup=${{ inputs.resourceGroupName }}
              deploymentMode: Validate

      deploy:
        needs: validate
        runs-on: ubuntu-latest
        outputs:
          containerRegistryName: ${{ steps.deploy.outputs.acr_name }}
          containerRegistryUrl: ${{ steps.deploy.outputs.acr_login_server_url }}
          resourceGroupName: ${{ steps.deploy.outputs.resource_group_name }}
          aksName: ${{ steps.deploy.outputs.aks_name }}
        steps:
          - uses: actions/checkout@v2
          - uses: azure/login@v1
            name: Sign in to Azure
            with:
              client-id: ${{ secrets.AZURE_CLIENT_ID }}
              tenant-id: ${{ secrets.AZURE_TENANT_ID }}
              subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          - uses: azure/arm-deploy@v1
            id: deploy
            name: Deploy Bicep file
            with:
              failOnStdErr: false
              deploymentName: ${{ github.run_number }}
              scope: subscription
              region: eastus
              template: ./deploy/main.bicep
              parameters: >
                resourceGroup=${{ inputs.resourceGroupName }}

    Adding the Bicep Deployment

    Once we have the Bicep deployment workflow, we can add it to the primary build definition in .github/workflows/dotnetcore.yml.

    Permissions

    First, we need to add a permissions block to let the workflow request our Azure AD token. This can go towards the top of the YAML file (I started it on line 5).

    permissions:
      id-token: write
      contents: read

    Deploy AKS Job

    Next, we'll add a reference to our reusable workflow. This will go after the build job.

      deploy_aks:
        needs: [build]
        uses: ./.github/workflows/deploy_aks.yml
        with:
          resourceGroupName: 'cnny-week3'
        secrets:
          AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
          AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
          AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Building and Publishing a Container Image

    Now that we have our target environment in place and an Azure Container Registry, we can build and publish our container images.

    Add a Reusable Workflow

    First, we'll create a new file for our reusable workflow at .github/workflows/publish_container_image.yml.

    We'll start the file with a name, the parameters it needs to run, and the permissions requirements for the federated identity request.

    name: Publish Container Images

    on:
      workflow_call:
        inputs:
          containerRegistryName:
            required: true
            type: string
          containerRegistryUrl:
            required: true
            type: string
          githubSha:
            required: true
            type: string
        secrets:
          AZURE_CLIENT_ID:
            required: true
          AZURE_TENANT_ID:
            required: true
          AZURE_SUBSCRIPTION_ID:
            required: true

    permissions:
      id-token: write
      contents: read

    Build the Container Images

    Our next step is to build the two container images we'll need for the application, the website and the API. We'll build the container images on our build worker and tag them with the git SHA, so there'll be a direct tie between the point in time in our codebase and the container images that represent it.

    jobs:
      publish_container_image:
        runs-on: ubuntu-latest

        steps:
          - uses: actions/checkout@v2
          - name: docker build
            run: |
              docker build . -f src/Web/Dockerfile -t ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
              docker build . -f src/PublicApi/Dockerfile -t ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Scan the Container Images

    Before we publish those container images, we'll scan them for vulnerabilities and best practice violations. We can add these two steps (one scan for each image).

          - name: scan web container image
            uses: Azure/container-scan@v0
            with:
              image-name: ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          - name: scan api container image
            uses: Azure/container-scan@v0
            with:
              image-name: ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    The container images provided have a few items that'll be found. We can create an allowed list at .github/containerscan/allowedlist.yaml to define vulnerabilities or best practice violations that we'll explicitly allow so they don't fail our build.

    general:
      vulnerabilities:
        - CVE-2022-29458
        - CVE-2022-3715
        - CVE-2022-1304
        - CVE-2021-33560
        - CVE-2020-16156
        - CVE-2019-8457
        - CVE-2018-8292
      bestPracticeViolations:
        - CIS-DI-0001
        - CIS-DI-0005
        - CIS-DI-0006
        - CIS-DI-0008

    Publish the Container Images

    Finally, we'll log in to Azure, then log in to our Azure Container Registry, and push our images.

          - uses: azure/login@v1
            name: Sign in to Azure
            with:
              client-id: ${{ secrets.AZURE_CLIENT_ID }}
              tenant-id: ${{ secrets.AZURE_TENANT_ID }}
              subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          - name: acr login
            run: az acr login --name ${{ inputs.containerRegistryName }}
          - name: docker push
            run: |
              docker push ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
              docker push ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Update the Build With the Image Build and Publish

    Now that we have our reusable workflow to create and publish our container images, we can include that in our primary build definition at .github/workflows/dotnetcore.yml.

      publish_container_image:
        needs: [deploy_aks]
        uses: ./.github/workflows/publish_container_image.yml
        with:
          containerRegistryName: ${{ needs.deploy_aks.outputs.containerRegistryName }}
          containerRegistryUrl: ${{ needs.deploy_aks.outputs.containerRegistryUrl }}
          githubSha: ${{ github.sha }}
        secrets:
          AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
          AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
          AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Deploying to Kubernetes

    Finally, we've gotten enough set up that a commit to the target branch will:

    • build and test our application code
    • set up (or validate) our AKS and ACR environment
    • and create, scan, and publish our container images to ACR

    Our last step will be to deploy our application to Kubernetes. We'll use the basic building blocks we worked with last week, deployments and services.

    Starting the Reusable Workflow to Deploy to AKS

    We'll start our workflow with our parameters that we need, as well as the permissions to access the token to log in to Azure.

    We'll check out our code, then log in to Azure, and use the az CLI to get credentials for our AKS cluster.

    name: deploy_to_aks

    on:
      workflow_call:
        inputs:
          aksName:
            required: true
            type: string
          resourceGroupName:
            required: true
            type: string
          containerRegistryUrl:
            required: true
            type: string
          githubSha:
            required: true
            type: string
        secrets:
          AZURE_CLIENT_ID:
            required: true
          AZURE_TENANT_ID:
            required: true
          AZURE_SUBSCRIPTION_ID:
            required: true

    permissions:
      id-token: write
      contents: read

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: azure/login@v1
            name: Sign in to Azure
            with:
              client-id: ${{ secrets.AZURE_CLIENT_ID }}
              tenant-id: ${{ secrets.AZURE_TENANT_ID }}
              subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          - name: Get AKS Credentials
            run: |
              az aks get-credentials --resource-group ${{ inputs.resourceGroupName }} --name ${{ inputs.aksName }}

    Edit the Deployment For Our Current Image Tag

    Let's add the Kubernetes manifests to our repo. This post is long enough, so you can find the content for the manifests folder in the source repo under the week3/day1 branch.

    tip

    If you only forked the main branch of the source repo, you can easily get the updated manifests by using the following git commands:

    git remote add upstream https://github.com/Azure-Samples/eShopOnAks
    git fetch upstream week3/day1
    git checkout upstream/week3/day1 manifests

    This will make the week3/day1 branch available locally and then we can update the manifests directory to match the state of that branch.

    The deployments and the service definitions should be familiar from last week's content (but not the same). This week, however, there's a new file in the manifests - ./manifests/kustomization.yaml

    This file helps us more dynamically edit our Kubernetes manifests, and support for it is baked right into the kubectl command.

    Kustomize Definition

    Kustomize allows us to specify specific resource manifests and areas of that manifest to replace. We've put some placeholders in our file as well, so we can replace those for each run of our CI/CD system.

    In ./manifests/kustomization.yaml you will see:

    resources:
      - deployment-api.yaml
      - deployment-web.yaml

    # Change the image name and version
    images:
      - name: notavalidregistry.azurecr.io/api:v0.1.0
        newName: <YOUR_ACR_SERVER>/api
        newTag: <YOUR_IMAGE_TAG>
      - name: notavalidregistry.azurecr.io/web:v0.1.0
        newName: <YOUR_ACR_SERVER>/web
        newTag: <YOUR_IMAGE_TAG>
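    If you want to sanity-check the transformation locally before wiring it into the pipeline, kubectl can render the kustomized output without applying anything. A quick sketch, run from the repository root and assuming the manifests folder above:

    # Render the transformed manifests to stdout; nothing is applied to the cluster
    kubectl kustomize ./manifests

    # Later, the workflow applies the same transformation with the -k flag
    # kubectl apply -k ./manifests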

    Replacing Values in our Build

    Now, we encounter a little problem - our deployment files need to know the tag and ACR server. We can do a bit of sed magic to edit the file on the fly.

    In .github/workflows/deploy_to_aks.yml, we'll add:

          - name: replace_placeholders_with_current_run
            run: |
              sed -i "s/<YOUR_ACR_SERVER>/${{ inputs.containerRegistryUrl }}/g" ./manifests/kustomization.yaml
              sed -i "s/<YOUR_IMAGE_TAG>/${{ inputs.githubSha }}/g" ./manifests/kustomization.yaml

    Deploying the Manifests

    With our manifests in place and our kustomization.yaml file (with commands to update it at runtime) ready to go, we can deploy our manifests.

    First, we'll deploy our database (deployment and service). Next, we'll use the -k parameter on kubectl to tell it to look for a kustomize configuration, transform the requested manifests, and apply those. Finally, we apply the service definitions for the web and API deployments.

          - run: |
              kubectl apply -f ./manifests/deployment-db.yaml \
                -f ./manifests/service-db.yaml
              kubectl apply -k ./manifests
              kubectl apply -f ./manifests/service-api.yaml \
                -f ./manifests/service-web.yaml
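    Once the workflow has run, a quick way to confirm the rollout from your own shell might look like this - a sketch that assumes the default namespace and deployment names matching the manifests (for example, web):

    # List what was created and check that the web deployment finished rolling out
    kubectl get deployments,services
    kubectl rollout status deployment/web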

    Summary

    We've covered a lot of ground in today's post. We set up federated credentials with GitHub. Then we added reusable workflows to deploy an AKS environment and build/scan/publish our container images, and then to deploy them into our AKS environment.

    This sets us up to start making changes to our application and Kubernetes configuration and have those changes automatically validated and deployed by our CI/CD system. Tomorrow, we'll look at updating our application environment with runtime configuration, persistent storage, and more.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/11/index.html b/cnny-2023/tags/30-daysofcloudnative/page/11/index.html index 9b8884ac7f..c9cae9d162 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/11/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/11/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "30daysofcloudnative"

    View All Tags

    · 9 min read
    Steven Murawski

    Welcome to Day 4 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we exposed the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS. Today we'll explore the topic of debugging and instrumentation.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Debugging
    • Bridge To Kubernetes
    • Instrumentation
    • Resources: For self-study!

    Debugging

    Debugging applications in a Kubernetes cluster can be challenging for several reasons:

    • Complexity: Kubernetes is a complex system with many moving parts, including pods, nodes, services, and config maps, all of which can interact in unexpected ways and cause issues.
    • Distributed Environment: Applications running in a Kubernetes cluster are often distributed across multiple nodes, which makes it harder to determine the root cause of an issue.
    • Logging and Monitoring: Debugging an application in a Kubernetes cluster requires access to logs and performance metrics, which can be difficult to obtain in a large and dynamic environment.
    • Resource Management: Kubernetes manages resources such as CPU and memory, which can impact the performance and behavior of applications. Debugging resource-related issues requires a deep understanding of the Kubernetes resource model and the underlying infrastructure.
    • Dynamic Nature: Kubernetes is designed to be dynamic, with the ability to add and remove resources as needed. This dynamic nature can make it difficult to reproduce issues and debug problems.

    However, there are many tools and practices that can help make debugging applications in a Kubernetes cluster easier, such as using centralized logging, monitoring, and tracing solutions, and following best practices for managing resources and deployment configurations.

    There's also another great tool in our toolbox - Bridge to Kubernetes.

    Bridge to Kubernetes

    Bridge to Kubernetes is a great tool for developing and debugging microservices without having to locally replicate all of the other required microservices.

    Bridge to Kubernetes works with Visual Studio or Visual Studio Code.

    We'll walk through using it with Visual Studio Code.

    Connecting Bridge to Kubernetes to Our Cluster

    Ensure your AKS cluster is the default for kubectl

    If you've recently spun up a new AKS cluster or you have been working with a different cluster, you may need to change what cluster credentials you have configured.
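    If the credentials are already merged into your kubeconfig, a quick check and switch might look like this (the context name is illustrative):

    # See which cluster kubectl currently points at
    kubectl config current-context

    # List available contexts and switch to the AKS cluster you want to debug against
    kubectl config get-contexts
    kubectl config use-context <your-aks-context-name>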

    If it's a new cluster, we can use:

    RESOURCE_GROUP=<YOUR RESOURCE GROUP NAME>
    CLUSTER_NAME=<YOUR AKS CLUSTER NAME>
    az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME

    Open the command palette

    Open the command palette and find Bridge to Kubernetes: Configure. You may need to start typing the name to get it to show up.

    The command palette for Visual Studio Code is open and the first item is Bridge to Kubernetes: Configure

    Pick the service you want to debug

    Bridge to Kubernetes will redirect a service for you. Pick the service you want to redirect, in this case we'll pick web.

    Selecting the `web` service to redirect in Visual Studio Code

    Identify the port your application runs on

    Next, we'll be prompted to identify what port our application will run on locally. For this application it'll be 5001, but that's just specific to this application (and the default for ASP.NET 7, I believe).

    Setting port 5001 as the port to redirect to the `web` Kubernetes service in Visual Studio Code

    Pick a debug configuration to extend

    Bridge to Kubernetes has a couple of ways to run - it can inject its setup and teardown into your existing debug configurations. We'll pick .NET Core Launch (web).

    Telling Bridge to Kubernetes to use the .NET Core Launch (web) debug configuration in Visual Studio Code

    Forward Traffic for All Requests

    The last prompt you'll get in the configuration is about how you want Bridge to Kubernetes to handle re-routing traffic. The default is that all requests into the service will get your local version.

    You can also redirect specific traffic. Bridge to Kubernetes will set up a subdomain and route specific traffic to your local service, while allowing other traffic to the deployed service.

    Allowing the launch of Endpoint Manager on Windows

    Using Bridge to Kubernetes to Debug Our Service

    Now that we've configured Bridge to Kubernetes, we see that tasks and a new launch configuration have been added.

    Added to .vscode/tasks.json:

            {
                "label": "bridge-to-kubernetes.resource",
                "type": "bridge-to-kubernetes.resource",
                "resource": "web",
                "resourceType": "service",
                "ports": [
                    5001
                ],
                "targetCluster": "aks1",
                "targetNamespace": "default",
                "useKubernetesServiceEnvironmentVariables": false
            },
            {
                "label": "bridge-to-kubernetes.compound",
                "dependsOn": [
                    "bridge-to-kubernetes.resource",
                    "build"
                ],
                "dependsOrder": "sequence"
            }

    And added to .vscode/launch.json:

    {
        "name": ".NET Core Launch (web) with Kubernetes",
        "type": "coreclr",
        "request": "launch",
        "preLaunchTask": "bridge-to-kubernetes.compound",
        "program": "${workspaceFolder}/src/Web/bin/Debug/net7.0/Web.dll",
        "args": [],
        "cwd": "${workspaceFolder}/src/Web",
        "stopAtEntry": false,
        "env": {
            "ASPNETCORE_ENVIRONMENT": "Development",
            "ASPNETCORE_URLS": "http://+:5001"
        },
        "sourceFileMap": {
            "/Views": "${workspaceFolder}/Views"
        }
    }

    Launch the debug configuration

    We can start the process with the .NET Core Launch (web) with Kubernetes launch configuration in the Debug pane in Visual Studio Code.

    Launch the `.NET Core Launch (web) with Kubernetes` from the Debug pane in Visual Studio Code

    Enable the Endpoint Manager

    Part of this process includes a local service to help manage the traffic routing and your hosts file. This will require admin or sudo privileges. On Windows, you'll get a prompt like:

    Prompt to launch the endpoint manager.

    Access your Kubernetes cluster "locally"

    Bridge to Kubernetes will set up a tunnel (thanks to port forwarding) to your local workstation and create local endpoints for the other Kubernetes hosted services in your cluster, as well as pretending to be a pod in that cluster (for the application you are debugging).

    Output from Bridge To Kubernetes setup task.

    After making the connection to your Kubernetes cluster, the launch configuration will continue. In this case, we'll make a debug build of the application and attach the debugger. (This process may cause the terminal in VS Code to scroll with build output. You can find the Bridge to Kubernetes output with the local IP addresses and ports in the Output pane for Bridge to Kubernetes.)

    You can set breakpoints, use your debug console, set watches, and run tests against your local version of the service.

    Exploring the Running Application Environment

    One of the cool things that Bridge to Kubernetes does for our debugging experience is bring the environment configuration that our deployed pod would inherit. When we launch our app, it'll see configuration and secrets that we'd expect our pod to be running with.

    To test this, we'll set a breakpoint in our application's start up to see what SQL Server is being used. We'll set a breakpoint at src/Infrastructure/Dependencies.cs on line 32.

    Then, we will start debugging the application with Bridge to Kubernetes. When it hits the breakpoint, we'll open the Debug pane and type configuration.GetConnectionString("CatalogConnection").

    When we run locally (not with Bridge to Kubernetes), we'd see:

    configuration.GetConnectionString("CatalogConnection")
    "Server=(localdb)\\mssqllocaldb;Integrated Security=true;Initial Catalog=Microsoft.eShopOnWeb.CatalogDb;"

    But, with Bridge to Kubernetes we see something more like (yours will vary based on the password):

    configuration.GetConnectionString("CatalogConnection")
    "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=*****************;TrustServerCertificate=True;"

    Debugging our local application connected to Kubernetes.

    We can see that the database server configured is based on our db service and the password is pulled from our secret in Azure KeyVault (via AKS).

    This helps us run our local application just like it was actually in our cluster.

    Going Further

    Bridge to Kubernetes also supports more advanced scenarios and, as you start routing traffic around inside your cluster, may require you to modify your application to pass along a kubernetes-route-as header to help ensure that traffic for your debugging workloads is properly handled. The docs go into much greater detail about that.
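    As a rough illustration only (values are placeholders, and the exact routing setup comes from the Bridge to Kubernetes docs), forwarding that header on a downstream call might look like:

    # Hypothetical example: keep a request on your isolated debug route by forwarding the header
    curl -H "kubernetes-route-as: <your-isolation-prefix>" http://<service-name>/<path>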

    Instrumentation

    Now that we've figured out our debugging story, we'll need to ensure that we have the right context clues to find where we need to debug or to give us a better idea of how well our microservices are running.

    Logging and Tracing

    Logging and tracing become even more critical in Kubernetes, where your application could be running in a number of pods across different nodes. When you have an issue, in addition to the normal application data, you'll want to know what pod and what node had the issue, what the state of those resources were (were you resource constrained or were shared resources unavailable?), and if autoscaling is enabled, you'll want to know if a scale event has been triggered. There are a multitude of other concerns based on your application and the environment you maintain.

    Given these informational needs, it's crucial to revisit your existing logging and instrumentation. Most frameworks and languages have extensible logging, tracing, and instrumentation libraries that you can iteratively add information to, such as pod and node states, and ensuring that requests can be traced across your microservices. This will pay you back time and time again when you have to troubleshoot issues in your existing environment.
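    Even before adding richer instrumentation, kubectl can give you some of that pod and node context on demand. A small sketch, assuming your pods carry an app=web label like the deployments in this series:

    # Tail recent logs from every pod behind the web deployment, prefixed with the pod name
    kubectl logs -l app=web --all-containers --prefix --tail=50

    # See which node each pod landed on, useful when correlating node-level issues
    kubectl get pods -l app=web -o wide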

    Centralized Logging

    To enhance the troubleshooting process further, it's important to implement centralized logging to consolidate logs from all your microservices into a single location. This makes it easier to search and analyze logs when you're troubleshooting an issue.

    Automated Alerting

    Additionally, implementing automated alerting, such as sending notifications when specific conditions occur in the logs, can help you detect issues before they escalate.

    End to end Visibility

    End-to-end visibility is also essential in understanding the flow of requests and responses between microservices in a distributed system. With end-to-end visibility, you can quickly identify bottlenecks and slowdowns in the system, helping you to resolve issues more efficiently.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/12/index.html b/cnny-2023/tags/30-daysofcloudnative/page/12/index.html index c02b97f2c8..646fba7c0c 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/12/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/12/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "30daysofcloudnative"

    View All Tags

    · 7 min read
    Nitya Narasimhan

    Welcome to Week 4 of #CloudNativeNewYear!

    This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. We'll start with an exploration of Serverless Container Options - ranging from managed services to Azure Kubernetes Service (AKS) and Azure Container Apps (ACA), to options that allow more granular control!

    What We'll Cover

    • The Azure Compute Landscape
    • Serverless Compute on Azure
    • Comparing Container Options On Azure
    • Other Considerations
    • Exercise: Try this yourself!
    • Resources: For self-study!


    We started this series with an introduction to core concepts:

    • In Containers 101, we learned why containerization matters. Think portability, isolation, scalability, resource-efficiency and cost-effectiveness. But not all apps can be containerized.
    • In Kubernetes 101, we learned how orchestration works. Think systems to automate container deployment, scaling, and management. But using Kubernetes directly can be complex.
    • In Exploring Cloud Native Options we asked the real questions: can we containerize - and should we? The first depends on app characteristics, the second on your requirements.

    For example:

    • Can we containerize? The answer might be no if your app has system or OS dependencies, requires access to low-level hardware, or maintains complex state across sessions.
    • Should we containerize? The answer might be yes if your app is microservices-based, is stateless by default, requires portability, or is a legacy app that can benefit from container isolation.

    As with every technology adoption decision process, there are no clear yes/no answers - just tradeoffs that you need to evaluate based on your architecture and application requirements. In today's post, we try to look at this from two main perspectives:

    1. Should you go serverless? Think: managed services that let you focus on app, not infra.
    2. What Azure Compute should I use? Think: best fit for my architecture & technology choices.

    Azure Compute Landscape

    Let's answer the second question first by exploring all available compute options on Azure. The illustrated decision-flow below is my favorite way to navigate the choices, with questions like:

    • Are you migrating an existing app or building a new one?
    • Can your app be containerized?
    • Does it use a specific technology (Spring Boot, Red Hat Openshift)?
    • Do you need access to the Kubernetes API?
    • What characterizes the workload? (event-driven, web app, microservices etc.)

    Read the docs to understand how your choices can be influenced by the hosting model (IaaS, PaaS, FaaS), supported features (Networking, DevOps, Scalability, Availability, Security), architectural styles (Microservices, Event-driven, High-Performance Compute, Task Automation, Web-Queue Worker) etc.

    Compute Choices

    Now that we know all available compute options, let's address the first question: why go serverless, and what are my serverless compute options on Azure?

    Azure Serverless Compute

    Serverless gets defined many ways, but from a compute perspective, we can focus on a few characteristics that are key to this decision:

    • managed services - focus on application, let cloud provider handle infrastructure.
    • pay for what you use - get cost-effective resource utilization, flexible pricing options.
    • autoscaling on demand - take advantage of built-in features like KEDA-compliant triggers.
    • use preferred languages - write code in Java, JS, C#, Python etc. (specifics based on service)
    • cloud-native architectures - can support event-driven solutions, APIs, Microservices, DevOps!

    So what are some of the key options for Serverless Compute on Azure? The linked article dives into Azure's fully-managed, end-to-end serverless solutions with comprehensive support for DevOps, DevTools, AI/ML, Database, Storage, Monitoring and Analytics integrations. But we'll focus on just the 4 categories of applications when we look at Compute!

    1. Serverless Containerized Microservices using Azure Container Apps. Code in your preferred language, exploit full Dapr support, scale easily with any KEDA-compliant trigger.
    2. Serverless Application Environments using Azure App Service. Suitable for hosting monolithic apps (vs. microservices) in a managed service, with built-in support for on-demand scaling.
    3. Serverless Kubernetes using Azure Kubernetes Service (AKS). Spin up pods inside container instances and deploy Kubernetes-based applications with built-in KEDA-compliant autoscaling.
    4. Serverless Functions using Azure Functions. Execute "code at the granularity of functions" in your preferred language, and scale on demand with event-driven compute.

    We'll talk about these, and other compute comparisons, at the end of the article. But let's start with the core option you might choose if you want a managed serverless compute solution with built-in features for delivering containerized microservices at scale. Hello, Azure Container Apps!

    Azure Container Apps

    Azure Container Apps (ACA) became generally available in May 2022 - providing customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform. The figure below showcases the different types of applications that can be built with ACA. Note that it comes with built-in KEDA-compliant autoscaling triggers, and other auto-scale criteria that may be better-suited to the type of application you are building.
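    To get a feel for how little infrastructure you manage with ACA, here's a minimal sketch using the Azure CLI - the resource names are placeholders and the quickstart image is an assumption, not part of this article:

    # Install/upgrade the containerapp CLI extension, then create a container app with external ingress
    az extension add --name containerapp --upgrade
    az containerapp up \
      --name my-container-app \
      --resource-group my-rg \
      --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
      --ingress external --target-port 80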

    About ACA

    So far in the series, we've put the spotlight on Azure Kubernetes Service (AKS) - so you're probably asking yourself: How does ACA compare to AKS? We're glad you asked. Check out our Go Cloud-native with Azure Container Apps post from the #ServerlessSeptember series last year for a deeper-dive, or review the figure below for the main comparison points.

    The key takeaway is this. Azure Container Apps (ACA) also runs on Kubernetes but abstracts away its complexity in a managed service offering that lets you get productive quickly without requiring detailed knowledge of Kubernetes workings or APIs. However, if you want full access and control over the Kubernetes API then go with Azure Kubernetes Service (AKS) instead.

    Comparison

    Other Container Options

    Azure Container Apps is the preferred Platform as a Service (PaaS) option for a fully-managed serverless solution on Azure that is purpose-built for cloud-native, microservices-based application workloads. But - there are other options that may be suitable for your specific needs, from a requirements and tradeoffs perspective. Let's review them quickly:

    1. Azure Functions is the serverless Functions-as-a-Service (FaaS) option, as opposed to ACA which supports a PaaS approach. It's optimized for running event-driven applications built at the granularity of ephemeral functions that can be deployed as code or containers.
    2. Azure App Service provides fully managed hosting for web applications that may be deployed using code or containers. It can be integrated with other services including Azure Container Apps and Azure Functions. It's optimized for deploying traditional web apps.
    3. Azure Kubernetes Service provides a fully managed Kubernetes option capable of running any Kubernetes workload, with direct access to the Kubernetes API.
    4. Azure Container Instances provides a single pod of Hyper-V isolated containers on demand, making them more of a low-level "building block" option compared to ACA.

    Based on the technology choices you made for application development, you may also have more specialized options you want to consider. For instance:

    1. Azure Spring Apps is ideal if you're running Spring Boot or Spring Cloud workloads on Azure.
    2. Azure Red Hat OpenShift is ideal for integrated Kubernetes-powered OpenShift on Azure.
    3. Azure Confidential Computing is ideal if you have data/code integrity and confidentiality needs.
    4. Kubernetes At The Edge is ideal for bare-metal options that extend compute to edge devices.

    This is just the tip of the iceberg in your decision-making journey - but hopefully, it gave you a good sense of the options and criteria that influence your final choices. Let's wrap this up with a look at self-study resources for skilling up further.

    Exercise

    Want to get hands on learning related to these technologies?

    TAKE THE CLOUD SKILLS CHALLENGE

    Register today and level up your skills by completing free learning modules, while competing with your peers for a place on the leaderboards!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/13/index.html b/cnny-2023/tags/30-daysofcloudnative/page/13/index.html index ac558ede05..e704977de8 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/13/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/13/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "30daysofcloudnative"

    View All Tags

    · 3 min read
    Cory Skimming

    It's the final week of #CloudNativeNewYear! This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. In today's post, we will introduce you to the basics of the open-source project Draft and how it can be used to easily create and deploy applications to Kubernetes.

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    What We'll Cover

    • What is Draft?
    • Draft basics
    • Demo: Developing to AKS with Draft
    • Resources


    What is Draft?

    Draft is an open-source tool that can be used to streamline the development and deployment of applications on Kubernetes clusters. It provides a simple and easy-to-use workflow for creating and deploying applications, making it easier for developers to focus on writing code and building features, rather than worrying about the underlying infrastructure. This is great for users who are just getting started with Kubernetes, or those who are just looking to simplify their experience.

    New to Kubernetes?

    Draft basics

    Draft streamlines Kubernetes development by taking a non-containerized application and generating the Dockerfiles, K8s manifests, Helm charts, and other artifacts associated with a containerized application. Draft can also create a GitHub Action workflow file to quickly build and deploy your application onto any Kubernetes cluster.

    1. 'draft create': Create a new Draft project by simply running the 'draft create' command - this command will walk you through a series of questions on your application specification (such as the application language) and create a Dockerfile, Helm chart, and Kubernetes manifests.
    2. 'draft generate-workflow': Automatically build out a GitHub Action using the 'draft generate-workflow' command.
    3. 'draft setup-gh': If you are using Azure, use this command to automate the GitHub OIDC setup process to ensure that you will be able to deploy your application using your GitHub Action.

    At this point, you will have all the files needed to deploy your app onto a Kubernetes cluster (we told you it was easy!).

    You can also use the 'draft info' command if you are looking for information on supported languages and deployment types. Let's see it in action, shall we?
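    Putting those commands together, a typical flow might look like the sketch below (prompts and flags can vary by Draft version, so treat this as illustrative):

    draft create             # answer the prompts to generate a Dockerfile, Helm chart, and Kubernetes manifests
    draft generate-workflow  # scaffold a GitHub Action that builds and deploys the app
    draft setup-gh           # on Azure, automate the GitHub OIDC federation setup
    draft info               # list supported languages and deployment types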


    Developing to AKS with Draft

    In this Microsoft Reactor session below, we'll briefly introduce Kubernetes and the Azure Kubernetes Service (AKS) and then demo how to enable your applications for Kubernetes using the open-source tool Draft. We'll show how Draft can help you create the boilerplate code to containerize your applications and add routing and scaling behaviors.

    Conclusion

    Overall, Draft simplifies the process of building, deploying, and managing applications on Kubernetes, and can make the overall journey from code to Kubernetes significantly easier.


    Resources


    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/14/index.html b/cnny-2023/tags/30-daysofcloudnative/page/14/index.html index 00d36eed2a..a0c2f7c812 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/14/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/14/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "30daysofcloudnative"

    View All Tags

    · 7 min read
    Vinicius Apolinario

    Welcome to Day 3 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about using Draft to accelerate your Kubernetes adoption. Today we'll explore the topic of Windows containers.

    What We'll Cover

    • Introduction
    • Windows containers overview
    • Windows base container images
    • Isolation
    • Exercise: Try this yourself!
    • Resources: For self-study!

    Introduction

    Windows containers were launched along with Windows Server 2016, and have evolved since. In the latest release, Windows Server 2022, Windows containers have reached a great level of maturity and allow customers to run production-grade workloads.

    While suitable for new development, Windows containers also give developers and operations teams a different approach than Linux containers. They allow existing Windows applications to be containerized with little or no code change, and they let professionals who are more comfortable with the Windows platform and OS leverage their skill set while taking advantage of the container platform.

    Windows container overview

    In essence, Windows containers are very similar to Linux. Since Windows containers use the same foundation of Docker containers, you can expect that the same architecture applies - with the specific notes of the Windows OS. For example, when running a Windows container via Docker, you use the same commands, such as docker run. To pull a container image, you can use docker pull, just like on Linux. However, to run a Windows container, you also need a Windows container host. This requirement is there because, as you might remember, a container shares the OS kernel with its container host.

    On Kubernetes, Windows containers have been supported since Windows Server 2019. Just like with Docker, you can manage Windows containers like any other resource in the Kubernetes ecosystem. A Windows node can be part of a Kubernetes cluster, allowing you to run Windows container-based applications on services like Azure Kubernetes Service. To deploy a Windows application to a Windows pod in Kubernetes, you can author a YAML specification much like you would for Linux. The main difference is that you would point to an image that runs on Windows, and you need to specify a node selector to indicate that the pod needs to run on a Windows node.

    Windows base container images

    With Windows containers, you will always use a base container image provided by Microsoft. This base container image contains the OS binaries for the container to run. The image can be as large as 3GB+, or as small as ~300MB. The difference in size is a consequence of the APIs and components available in each Windows base container image. There are primarily three images: Nano Server, Server Core, and Server.

    Nano Server is the smallest image, ranging around 300MB. It's a base container image for new development and cloud-native scenarios. Applications need to target Nano Server as the Windows OS, so not all frameworks will work. For example, .NET works on Nano Server, but .NET Framework doesn't. Other third-party frameworks also work on Nano Server, such as Apache, NodeJS, Python, Tomcat, the Java runtime, JBoss, and Redis, among others.

    Server Core is a much larger base container image, ranging around 1.25GB. Its larger size is compensated by its application compatibility. Simply put, any application that meets the requirements to run in a Windows container can be containerized with this image.

    The Server image builds on the Server Core one. It ranges around 3.1GB and has even greater application compatibility than the Server Core image. In addition to the traditional Windows APIs and components, this image allows for scenarios such as Machine Learning via DirectX with GPU access.

    The best image for your scenario depends on the requirements your application has on the Windows OS inside a container. However, there are some scenarios that are not supported at all on Windows containers - such as GUI or RDP-dependent applications and some Windows Server infrastructure roles, such as Active Directory, among others.

    Isolation

    When running containers, the kernel of the container host is shared with the containers running on it. While extremely convenient, this poses a potential risk for multi-tenant scenarios. If one container is compromised and has access to the host, it could potentially compromise other containers in the same system.

    For enterprise customers running on-premises (or even in the cloud), this can be mitigated by using a VM as a container host and considering the VM itself a security boundary. However, if multiple workloads from different tenants need to share the same host, Windows containers offer another option: Hyper-V isolation. While the name Hyper-V is associated with VMs, its virtualization capabilities extend to other services, including containers. Hyper-V isolated containers run on a purpose-built, extremely small, highly performant VM. However, you manage a container running with Hyper-V isolation the same way you do a process-isolated one. In fact, the only notable difference is that you need to append the --isolation=hyperv flag to the docker run command.

    Exercise

    Here are a few examples of how to use Windows containers:

    Run Windows containers via Docker on your machine

    To pull a Windows base container image:

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022

    To run a basic IIS container:

    #This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    Run the same IIS container with Hyper-V isolation

    #This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 --isolation=hyperv mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    To run a Windows container interactively:

    docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 powershell

    Run Windows containers on Kubernetes

    To prepare an AKS cluster for Windows containers (note: replace the values in the example below with the ones from your environment):

    echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
    az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --generate-ssh-keys \
    --windows-admin-username $WINDOWS_USERNAME \
    --vm-set-type VirtualMachineScaleSets \
    --network-plugin azure

    To add a Windows node pool for Windows containers:

    az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwin \
    --node-count 1

    Deploy a sample ASP.NET application to the AKS cluster above using the YAML file below:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      replicas: 1
      template:
        metadata:
          name: sample
          labels:
            app: sample
        spec:
          nodeSelector:
            "kubernetes.io/os": windows
          containers:
          - name: sample
            image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
            resources:
              limits:
                cpu: 1
                memory: 800M
            ports:
            - containerPort: 80
      selector:
        matchLabels:
          app: sample
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sample
    spec:
      type: LoadBalancer
      ports:
      - protocol: TCP
        port: 80
      selector:
        app: sample

    Save the file above and run the command below on your Kubernetes cluster:

    kubectl apply -f <filename>

    Once deployed, you can access the application by getting the IP address of your service:

    kubectl get service
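
    If the EXTERNAL-IP column still shows pending, the Azure load balancer is being provisioned; you can watch until an address appears - a small convenience sketch using the sample service name from the manifest above:

    kubectl get service sample --watch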

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/15/index.html b/cnny-2023/tags/30-daysofcloudnative/page/15/index.html index 013573dd26..807cfe4f03 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/15/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/15/index.html @@ -14,13 +14,13 @@ - +


    · 4 min read
    Jorge Arteiro

    Welcome to Day 4 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about Windows Containers. Today we'll explore addons and extensions available to Azure Kubernetes Services (AKS).

    What We'll Cover

    • Introduction
    • Add-ons
    • Extensions
    • Add-ons vs Extensions
    • Resources

    Introduction

    Azure Kubernetes Service (AKS) is a fully managed container orchestration service that makes it easy to deploy and manage containerized applications on Azure. AKS offers a number of features and capabilities, including the ability to extend its supported functionality through the use of add-ons and extensions.

    There are also integrations available from open-source projects and third parties, but they are not covered by the AKS support policy.

    Add-ons

    Add-ons provide a supported way to extend AKS. Installation, configuration, and lifecycle are managed by AKS following pre-determined update rules.

    As an example, let's enable Container Insights with the monitoring add-on on an existing AKS cluster using the az aks enable-addons --addons CLI command:

    az aks enable-addons \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --addons monitoring

    Or you can use az aks create --enable-addons when creating a new cluster:

    az aks create \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --enable-addons monitoring
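
    To confirm which add-ons are enabled on a cluster, you can query its addon profiles - a sketch reusing the same placeholder names as above:

    az aks show --name MyManagedCluster --resource-group MyResourceGroup --query addonProfiles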

    The current available add-ons are:

    1. http_application_routing - Configure ingress with automatic public DNS name creation. Only recommended for development.
    2. monitoring - Container Insights monitoring.
    3. virtual-node - CNCF virtual nodes open source project.
    4. azure-policy - Azure Policy for AKS.
    5. ingress-appgw - Application Gateway Ingress Controller (AGIC).
    6. open-service-mesh - CNCF Open Service Mesh project.
    7. azure-keyvault-secrets-provider - Azure Key Vault Secrets Provider for Secret Store CSI Driver.
    8. web_application_routing - Managed NGINX ingress Controller.
    9. keda - CNCF Event-driven autoscaling project.

    For more details, get the updated list of AKS Add-ons here

    Extensions

    Cluster extensions use Helm charts and integrate with Azure Resource Manager (ARM) to provide installation and lifecycle management of capabilities on top of AKS.

    Extensions can be auto-upgraded across minor versions, but this requires extra management and configuration. Using the scope parameter, an extension can be installed on the whole cluster or per namespace.

    AKS extensions require an Azure CLI extension to be installed. To add or update this CLI extension, use the following commands:

    az extension add --name k8s-extension

    and to update an existing extension

    az extension update --name k8s-extension

    There are currently three available extensions:

    1. Dapr - CNCF Dapr project.
    2. Azure ML - Integrate Azure Machine Learning with AKS to train, inference and manage ML models.
    3. Flux (GitOps) - CNCF Flux project integrated with AKS to enable cluster configuration and application deployment using GitOps.

    As an example, you can install Azure ML using the following command:

    az k8s-extension create \
    --name aml-compute --extension-type Microsoft.AzureML.Kubernetes \
    --scope cluster --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters \
    --configuration-settings enableInference=True allowInsecureConnections=True
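
    Once created, you can list the extensions installed on the cluster to confirm the deployment - a sketch reusing the same placeholders:

    az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters -o table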

    For more details, get the updated list of AKS Extensions here

    Add-ons vs Extensions

    AKS add-ons bring the advantage of being fully managed by AKS itself, while AKS extensions are more flexible and configurable but require an extra level of management.

    Add-ons are part of the AKS resource provider in the Azure API, while AKS extensions are a separate resource provider in the Azure API.

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/16/index.html b/cnny-2023/tags/30-daysofcloudnative/page/16/index.html index bc334220b1..5342e59787 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/16/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/16/index.html @@ -14,13 +14,13 @@ - +


    · 6 min read
    Cory Skimming
    Steven Murawski
    Paul Yu
    Josh Duffney
    Nitya Narasimhan
    Vinicius Apolinario
    Jorge Arteiro
    Devanshi Joshi

    And that's a wrap on the inaugural #CloudNativeNewYear! Thank you for joining us to kick off the new year with this learning journey into cloud-native! In this final post of the 2023 celebration of all things cloud-native, we'll do two things:

    • Look Back - with a quick retrospective of what was covered.
    • Look Ahead - with resources and suggestions for how you can continue your skilling journey!

    We appreciate your time and attention and we hope you found this curated learning valuable. Feedback and suggestions are always welcome. From our entire team, we wish you good luck with the learning journey - now go build some apps and share your knowledge! 🎉


    What We'll Cover

    • Cloud-native fundamentals
    • Kubernetes fundamentals
    • Bringing your applications to Kubernetes
    • Go further with cloud-native
    • Resources to keep the celebration going!

    Week 1: Cloud-native Fundamentals

    In Week 1, we took a tour through the fundamentals of cloud-native technologies, including a walkthrough of the core concepts of containers, microservices, and Kubernetes.

    • Jan 23 - Cloud-native Fundamentals: The answers to life and all the universe - what is cloud-native? What makes an application cloud-native? What are the benefits? (yes, we all know it's 42, but hey, gotta start somewhere!)
    • Jan 24 - Containers 101: Containers are an essential component of cloud-native development. In this intro post, we cover how containers work and why they have become so popular.
    • Jan 25 - Kubernetes 101: Kuber-what-now? Learn the basics of Kubernetes and how it enables us to deploy and manage our applications effectively and consistently.
    A QUICKSTART GUIDE TO KUBERNETES CONCEPTS

    Missed it Live? Tune in to A Quickstart Guide to Kubernetes Concepts on demand, now!

    • Jan 26 - Microservices 101: What is a microservices architecture and how can we go about designing one?
    • Jan 27 - Exploring your Cloud Native Options: Cloud-native, while catchy, can be a very broad term. What technologies should you use? Learn some basic guidelines for when it is optimal to use different technologies for your project.

    Week 2: Kubernetes Fundamentals

    In Week 2, we took a deeper dive into the Fundamentals of Kubernetes. The posts and live demo from this week took us through how to build a simple application on Kubernetes, covering everything from deployment to networking and scaling. Note: for our samples and demo we have used Azure Kubernetes Service, but the principles apply to any Kubernetes!

    • Jan 30 - Pods and Deployments: how to use pods and deployments in Kubernetes.
    • Jan 31 - Services and Ingress: how to use services and ingress, walking through the steps of making our containers accessible internally and externally!
    • Feb 1 - ConfigMaps and Secrets: how to pass configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.
    • Feb 2 - Volumes, Mounts, and Claims: how to use persistent storage on Kubernetes (and ensure your data can survive container restarts!).
    • Feb 3 - Scaling Pods and Nodes: how to scale pods and nodes in our Kubernetes cluster.
    ASK THE EXPERTS: AZURE KUBERNETES SERVICE

    Missed it Live? Tune in to Ask the Expert with Azure Kubernetes Service on demand, now!


    Week 3: Bringing your applications to Kubernetes

    So, you have learned how to build an application on Kubernetes. What about your existing applications? In Week 3, we explored how to take an existing application and set it up to run in Kubernetes:

    • Feb 6 - CI/CD: learn how to get an existing application running in Kubernetes with a full pipeline in GitHub Actions.
    • Feb 7 - Adapting Storage, Secrets, and Configuration: how to evaluate our sample application's configuration, storage, and networking requirements and implement them using Kubernetes.
    • Feb 8 - Opening your Application with Ingress: how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.
    • Feb 9 - Debugging and Instrumentation: how to debug and instrument your application now that it is on Kubernetes.
    • Feb 10 - CI/CD Secure Supply Chain: now that we have set up our application on Kubernetes, let's talk about container image signing and how to set up a secure supply chain.

    Week 4: Go Further with Cloud-Native

    This week we have gone further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner.

    And today, February 17th, with this one post to rule (er, collect) them all!


    Keep the Learning Going!

    Learning is great, so why stop here? We have a host of great resources and samples for you to continue your cloud-native journey with Azure below:


    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/2/index.html b/cnny-2023/tags/30-daysofcloudnative/page/2/index.html index 189aa8338c..7966913f48 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/2/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/2/index.html @@ -14,14 +14,14 @@ - +


    · 5 min read
    Cory Skimming

    Welcome to Week 1 of #CloudNativeNewYear!

    Cloud-native New Year

    You will often hear the term "cloud-native" when discussing modern application development, but even a quick online search will return a huge number of articles, tweets, and web pages with a variety of definitions. So, what does cloud-native actually mean? Also, what makes an application a cloud-native application versus a "regular" application?

    Today, we will address these questions and more as we kickstart our learning journey (and our new year!) with an introductory dive into the wonderful world of cloud-native.


    What We'll Cover

    • What is cloud-native?
    • What is a cloud-native application?
    • The benefits of cloud-native
    • The five pillars of cloud-native
    • Exercise: Take the Cloud Skills Challenge!

    1. What is cloud-native?

    The term "cloud-native" can seem pretty self-evident (yes, hello, native to the cloud?), and in a way, it is. While there are lots of definitions of cloud-native floating around, at its core, cloud-native simply refers to a modern approach to building software that takes advantage of cloud services and environments. This includes using cloud-native technologies, such as containers, microservices, and serverless, and following best practices for deploying, scaling, and managing applications in a cloud environment.

    Official definition from the Cloud Native Computing Foundation:

    Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

    These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil. Source


    2. So, what exactly is a cloud-native application?

    Cloud-native applications are specifically designed to take advantage of the scalability, resiliency, and distributed nature of modern cloud infrastructure. But how does this differ from a "traditional" application?

    Traditional applications have generally been built, tested, and deployed as a single, monolithic unit. The monolithic nature of this type of architecture creates close dependencies between components. This complexity and interweaving only increases as an application grows, which can make it difficult to evolve (not to mention troubleshoot) and challenging to operate over time.

    In contrast, in cloud-native architectures the application components are decomposed into loosely coupled services, rather than built and deployed as one block of code. This decomposition into multiple self-contained services enables teams to manage complexity and improve the speed, agility, and scale of software delivery. Many small parts enable teams to make targeted updates, deliver new features, and fix issues without causing broader service disruption.


    3. The benefits of cloud-native

    Cloud-native architectures can bring many benefits to an organization, including:

    1. Scalability: easily scale up or down based on demand, allowing organizations to adjust their resource usage and costs as needed.
    2. Flexibility: deploy and run on any cloud platform, and easily move between clouds and on-premises environments.
    3. High-availability: techniques such as redundancy, self-healing, and automatic failover help ensure that cloud-native applications are designed to be highly-available and fault tolerant.
    4. Reduced costs: take advantage of the pay-as-you-go model of cloud computing, reducing the need for expensive infrastructure investments.
    5. Improved security: tap in to cloud security features, such as encryption and identity management, to improve the security of the application.
    6. Increased agility: easily add new features or services to your applications to meet changing business needs and market demand.

    4. The five pillars of cloud-native

    There are five areas that are generally cited as the core building blocks of cloud-native architecture:

    1. Microservices: Breaking down monolithic applications into smaller, independent, and loosely-coupled services that can be developed, deployed, and scaled independently.
    2. Containers: Packaging software in lightweight, portable, and self-sufficient containers that can run consistently across different environments.
    3. Automation: Using automation tools and DevOps processes to manage and operate the cloud-native infrastructure and applications, including deployment, scaling, monitoring, and self-healing.
    4. Service discovery: Using service discovery mechanisms, such as APIs & service meshes, to enable services to discover and communicate with each other.
    5. Observability: Collecting and analyzing data from the infrastructure and applications to understand and optimize the performance, behavior, and health of the system.

    These can (and should!) be used in combination to deliver cloud-native solutions that are highly scalable, flexible, and available.

    WHAT'S NEXT

    Stay tuned, as we will be diving deeper into these topics in the coming weeks:

    • Jan 24: Containers 101
    • Jan 25: Kubernetes 101
    • Jan 26: Adopting Microservices with Kubernetes
    • Jan 27: Exploring your Cloud-native Options

    Resources


    Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!


    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/3/index.html b/cnny-2023/tags/30-daysofcloudnative/page/3/index.html index ddebd0c306..9572afbd86 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/3/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/3/index.html @@ -14,14 +14,14 @@ - +


    · 4 min read
    Steven Murawski
    Paul Yu
    Josh Duffney

    Welcome to Day 2 of Week 1 of #CloudNativeNewYear!

    Today, we'll focus on building an understanding of containers.

    What We'll Cover

    • Introduction
    • How do Containers Work?
    • Why are Containers Becoming so Popular?
    • Conclusion
    • Resources
    • Learning Path

    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    In the beginning, we deployed our applications onto physical servers. We only had a certain number of those servers, so often they hosted multiple applications. This led to some problems when those applications shared dependencies. Upgrading one application could break another application on the same server.

    Enter virtualization. Virtualization allowed us to run our applications in an isolated operating system instance. This removed much of the risk of updating shared dependencies. However, it increased our overhead since we had to run a full operating system for each application environment.

    To address the challenges created by virtualization, containerization was created to improve isolation without duplicating kernel level resources. Containers provide efficient and consistent deployment and runtime experiences for our applications and have become very popular as a way of packaging and distributing applications.

    How do Containers Work?

    Containers build on two capabilities in the Linux operating system: namespaces and cgroups. These constructs allow the operating system to provide isolation to a process or group of processes, keeping their access to filesystem resources separate and providing controls on resource utilization. This, combined with tooling to help package, deploy, and run container images, has led to their popularity in today's operating environment. It gives us isolation without the overhead of additional operating system resources.

    When a container host is deployed on an operating system, it schedules access to the OS components. It does this by providing a logically isolated group that can contain the processes for a given application, called a namespace. The container host then manages and schedules access from the namespace to the host OS. The container host also uses cgroups to allocate compute resources. Together, with the help of cgroups and namespaces, the container host can schedule multiple applications to access host OS resources.

    Overall, this gives the illusion of virtualizing the host OS, where each application gets its own OS. In actuality, all the applications are running on the same operating system and sharing the same kernel as the container host.
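
    As a rough illustration of those cgroup-backed controls, Docker exposes flags that translate directly into resource limits on the container's process group - a minimal sketch, using the public nginx image as a stand-in workload:

    #Cap the container at half a CPU and 256 MiB of memory (enforced via cgroups)
    docker run -d --name limited-nginx --cpus="0.5" --memory="256m" nginx:alpine

    #Inspect the limits Docker recorded for the container
    docker inspect limited-nginx --format "{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}"

    #Clean up
    docker rm -f limited-nginx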

    Containers are popular in the software development industry because they provide several benefits over traditional virtualization methods. Some of these benefits include:

    • Portability: Containers make it easy to move an application from one environment to another without having to worry about compatibility issues or missing dependencies.
    • Isolation: Containers provide a level of isolation between the application and the host system, which means that the application running in the container cannot access the host system's resources.
    • Scalability: Containers make it easy to scale an application up or down as needed, which is useful for applications that experience a lot of traffic or need to handle a lot of data.
    • Resource Efficiency: Containers are more resource-efficient than traditional virtualization methods because they don't require a full guest operating system for each application instance, unlike virtual machines.
    • Cost-Effective: Containers are more cost-effective than traditional virtualization methods because they don't require expensive hardware or licensing fees.

    Conclusion

    Containers are a powerful technology that allows developers to package and deploy applications in a portable and isolated environment. This technology is becoming increasingly popular in the world of software development and is being used by many companies and organizations to improve their application deployment and management processes. With the benefits of portability, isolation, scalability, resource efficiency, and cost-effectiveness, containers are definitely worth considering for your next application development project.

    Containerizing applications is a key step in modernizing them, and there are many other patterns that can be adopted to achieve cloud-native architectures, including using serverless platforms, Kubernetes, and implementing DevOps practices.

    Resources

    Learning Path

    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/4/index.html b/cnny-2023/tags/30-daysofcloudnative/page/4/index.html index ed67a026ee..6858c5ee2b 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/4/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/4/index.html @@ -14,14 +14,14 @@ - +


    · 3 min read
    Steven Murawski

    Welcome to Day 3 of Week 1 of #CloudNativeNewYear!

    This week we'll focus on what Kubernetes is.

    What We'll Cover

    • Introduction
    • What is Kubernetes? (Video)
    • How does Kubernetes Work? (Video)
    • Conclusion


    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    Kubernetes is an open source container orchestration engine that can help with automated deployment, scaling, and management of our applications.

    Kubernetes takes physical (or virtual) resources and provides a consistent API over them, bringing a consistency to the management and runtime experience for our applications. Kubernetes provides us with a number of capabilities such as:

    • Container scheduling
    • Service discovery and load balancing
    • Storage orchestration
    • Automated rollouts and rollbacks
    • Automatic bin packing
    • Self-healing
    • Secret and configuration management

    We'll learn more about most of these topics as we progress through Cloud Native New Year.
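
    To get a feel for that consistent API, here's a minimal sketch of a few of these capabilities using kubectl - the deployment name and nginx image are just illustrative, and this assumes kubectl is already pointed at a cluster:

    kubectl create deployment hello --image=nginx:alpine   #automated deployment
    kubectl scale deployment hello --replicas=3            #scaling
    kubectl delete pod -l app=hello                        #self-healing: the pods are recreated automatically
    kubectl get pods -l app=hello
    kubectl delete deployment hello                        #clean up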

    What is Kubernetes?

    Let's hear from Brendan Burns, one of the founders of Kubernetes, as to what Kubernetes actually is.

    How does Kubernetes Work?

    And Brendan shares a bit more with us about how Kubernetes works.

    Conclusion

    Kubernetes allows us to deploy and manage our applications effectively and consistently.

    By providing a consistent API across many of the concerns our applications have, like load balancing, networking, storage, and compute, Kubernetes improves our ability to both build and ship new software.

    There are standards for the applications to depend on for the resources they need. Deployments, metrics, and logs are provided in a standardized fashion, allowing more efficient operations across our application environments.

    And since Kubernetes is an open source platform, it can be found in just about every type of operating environment - cloud, virtual machines, physical hardware, shared data centers, even small devices like Raspberry Pis!

    Want to learn more? Join us for a webinar on Kubernetes Concepts (or catch the playback) on Thursday, January 26th at 1 PM PST and watch for the rest of this series right here!

    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/5/index.html b/cnny-2023/tags/30-daysofcloudnative/page/5/index.html index 6b8a9391b5..443c49a246 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/5/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/5/index.html @@ -14,13 +14,13 @@ - +


    · 6 min read
    Josh Duffney

    Welcome to Day 4 of Week 1 of #CloudNativeNewYear!

    This week we'll continue focusing on cloud-native fundamentals with this post on microservices architecture. We'll look at what microservices are, how to design them, and the trade-offs they introduce.

    What We'll Cover

    • What is Microservice Architecture?
    • How do you design a Microservice?
    • What challenges do Microservices introduce?
    • Conclusion
    • Resources


    Microservices are a modern way of designing and building software that increases deployment velocity by decomposing an application into small autonomous services that can be deployed independently.

    By deploying loosely coupled microservices your applications can be developed, deployed, and scaled independently. Because each service is independent, it can be updated or replaced without having to worry about the impact on the rest of the application. This means that if a bug is found in one service, it can be fixed without having to redeploy the entire application. All of which gives an organization the ability to deliver value to their customers faster.

    In this article, we will explore the basics of microservices architecture, its benefits and challenges, and how it can help improve the development, deployment, and maintenance of software applications.

    What is Microservice Architecture?

    Before explaining what Microservice architecture is, it’s important to understand what problems microservices aim to address.

    Traditional software development is centered around building monolithic applications. Monolithic applications are built as a single, large codebase, meaning the code is tightly coupled, which causes the monolithic app to suffer from the following:

    Too much Complexity: Monolithic applications can become complex and difficult to understand and maintain as they grow. This can make it hard to identify and fix bugs and add new features.

    Difficult to Scale: Monolithic applications can be difficult to scale as they often have a single point of failure, which can cause the whole application to crash if a service fails.

    Slow Deployment: Deploying a monolithic application can be risky and time-consuming, as a small change in one part of the codebase can affect the entire application.

    Microservice architecture (often called microservices) is an architecture style that addresses the challenges created by Monolithic applications. Microservices architecture is a way of designing and building software applications as a collection of small, independent services that communicate with each other through APIs. This allows for faster development and deployment cycles, as well as easier scaling and maintenance than is possible with a monolithic application.

    How do you design a Microservice?

    Building applications with Microservices architecture requires a different approach. Microservices architecture focuses on business capabilities rather than technical layers, such as data access or messaging. Doing so requires that you shift your focus away from the technical stack and model your applications based upon the various domains that exist within the business.

    Domain-driven design (DDD) is a way to design software by focusing on the business needs. You can use Domain-driven design as a framework that guides the development of well-designed microservices by building services that encapsulate knowledge in each domain and abstract that knowledge from clients.

    In Domain-driven design you start by modeling the business domain and creating a domain model. A domain model is an abstract model of the business that distills and organizes a domain of knowledge and provides a common language for developers and domain experts. It's this resulting domain model that microservices are best suited to be built around, because it helps establish a well-defined boundary between external systems and other internal applications.

    In short, before you begin designing microservices, start by mapping the functions of the business and their connections to create a domain model for the microservice(s) to be built around.

    What challenges do Microservices introduce?

    Microservices solve a lot of problems and have several advantages, but the grass isn’t always greener on the other side.

    One of the key challenges of microservices is managing communication between services. Because services are independent, they need to communicate with each other through APIs. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear API design, with well-defined inputs and outputs for each service. It is also important to have a system for managing and monitoring communication between services, to ensure that everything is running smoothly.

    Another challenge of microservices is managing the deployment and scaling of services. Because each service is independent, it needs to be deployed and scaled separately from the rest of the application. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear and consistent deployment process, with well-defined steps for deploying and scaling each service. Furthermore, it is advisable to host them on a system with self-healing capabilities to reduce operational burden.

    It is also important to have a system for monitoring and managing the deployment and scaling of services, to ensure optimal performance.

    Each of these challenges has created fertile ground for the tooling and processes that exist in the cloud-native ecosystem. Kubernetes, CI/CD, and other DevOps practices are part of the package when adopting the microservices architecture.

    Conclusion

    In summary, microservices architecture focuses on software applications as a collection of small, independent services that communicate with each other over well-defined APIs.

    The main advantages of microservices include:

    • increased flexibility and scalability per microservice,
    • efficient resource utilization (with help from a container orchestrator like Kubernetes),
    • and faster development cycles.

    Continue following along with this series to see how you can use Kubernetes to help adopt microservices patterns in your own environments!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/6/index.html b/cnny-2023/tags/30-daysofcloudnative/page/6/index.html index 50243388e4..a2950b54ca 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/6/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/6/index.html @@ -14,14 +14,14 @@ - +


    · 6 min read
    Cory Skimming

    We are excited to be wrapping up our first week of #CloudNativeNewYear! This week, we have tried to set the stage by covering the fundamentals of cloud-native practices and technologies, including primers on containerization, microservices, and Kubernetes.

    Don't forget to sign up for the Cloud Skills Challenge!

    Today, we will do a brief recap of some of these technologies and provide some basic guidelines for when it is optimal to use each.


    What We'll Cover

    • To Containerize or not to Containerize?
    • The power of Kubernetes
    • Where does Serverless fit?
    • Resources
    • What's coming next!


    Just joining us now? Check out these other Week 1 posts:

    To Containerize or not to Containerize?

    As mentioned in our Containers 101 post earlier this week, containers can provide several benefits over traditional virtualization methods, which has made them popular within the software development community. Containers provide a consistent and predictable runtime environment, which can help reduce the risk of compatibility issues and simplify the deployment process. Additionally, containers can improve resource efficiency by allowing multiple applications to run on the same host while isolating their dependencies.

    Some types of apps that are a particularly good fit for containerization include:

    1. Microservices: Containers are particularly well-suited for microservices-based applications, as they can be used to isolate and deploy individual components of the system. This allows for more flexibility and scalability in the deployment process.
    2. Stateless applications: Applications that do not maintain state across multiple sessions, such as web applications, are well-suited for containers. Containers can be easily scaled up or down as needed and replaced with new instances, without losing data.
    3. Portable applications: Applications that need to be deployed in different environments, such as on-premises, in the cloud, or on edge devices, can benefit from containerization. The consistent and portable runtime environment of containers can make it easier to move the application between different environments.
    4. Legacy applications: Applications that are built using older technologies or that have compatibility issues can be containerized to run in an isolated environment, without impacting other applications or the host system.
    5. Dev and testing environments: Containerization can be used to create isolated development and testing environments, which can be easily created and destroyed as needed.

    While there are many types of applications that can benefit from a containerized approach, it's worth noting that containerization is not always the best option, and it's important to weigh the benefits and trade-offs before deciding to containerize an application. Additionally, some types of applications may not be a good fit for containers including:

    • Apps that require full access to host resources: Containers are isolated from the host system, so if an application needs direct access to hardware resources such as GPUs or specialized devices, it might not work well in a containerized environment.
    • Apps that require low-level system access: If an application requires deep access to the underlying operating system, it may not be suitable for running in a container.
    • Applications that have specific OS dependencies: Apps that have specific dependencies on a certain version of an operating system or libraries may not be able to run in a container.
    • Stateful applications: Apps that maintain state across multiple sessions, such as databases, may not be well suited for containers. Containers are ephemeral by design, so the data stored inside a container may not persist between restarts.

    The good news is that some of these limitations can be overcome with the use of orchestration technologies such as Kubernetes and by carefully designing the architecture of the application.


    The power of Kubernetes

    Speaking of Kubernetes...

    Kubernetes is a powerful tool for managing and deploying containerized applications in production environments, particularly for applications that need to scale, handle large numbers of requests, or run in multi-cloud or hybrid environments.

    Kubernetes is well-suited for a wide variety of applications, but it is particularly well-suited for the following types of applications:

    1. Microservices-based applications: Kubernetes provides a powerful set of tools for managing and deploying microservices-based applications, making it easy to scale, update, and manage the individual components of the application.
    2. Stateful applications: Kubernetes provides support for stateful applications through the use of Persistent Volumes and StatefulSets, allowing for applications that need to maintain state across multiple instances.
    3. Large-scale, highly-available systems: Kubernetes provides built-in support for scaling, self-healing, and rolling updates, making it an ideal choice for large-scale, highly-available systems that need to handle large numbers of users and requests.
    4. Multi-cloud and hybrid environments: Kubernetes can be used to deploy and manage applications across multiple cloud providers and on-premises environments, making it a good choice for organizations that want to take advantage of the benefits of multiple cloud providers or that need to deploy applications in a hybrid environment.
    New to Kubernetes?

    Where does Serverless fit in?

    Serverless is a cloud computing model where the cloud provider (like Azure) is responsible for executing a piece of code by dynamically allocating the resources. With serverless, you only pay for the exact amount of compute time that you use, rather than paying for a fixed amount of resources. This can lead to significant cost savings, particularly for applications with variable or unpredictable workloads.

    Serverless is commonly used for building applications like web or mobile apps, IoT, data processing, and real-time streaming - apps where the workloads are variable and high scalability is required. It's important to note that serverless is not a replacement for all types of workloads - it's best suited for stateless, short-lived and small-scale workloads.

    For a detailed look into the world of Serverless and lots of great learning content, revisit #30DaysofServerless.


    Resources


    What's up next in #CloudNativeNewYear?

    Week 1 has been all about the fundamentals of cloud-native. Next week, the team will be diving in to application deployment with Azure Kubernetes Service. Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!


    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/7/index.html b/cnny-2023/tags/30-daysofcloudnative/page/7/index.html index 6932964b03..6ce7258c42 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/7/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/7/index.html @@ -14,13 +14,13 @@ - +


    · 14 min read
    Steven Murawski

    Welcome to Day #1 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Last week we talked about Cloud Native architectures and the Cloud Native landscape. Today we'll explore the topic of Pods and Deployments in Kubernetes.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Setting Up A Kubernetes Environment in Azure
    • Running Containers in Kubernetes Pods
    • Making the Pods Resilient with Deployments
    • Exercise
    • Resources

    Setting Up A Kubernetes Environment in Azure

    For this week, we'll be working with a simple app - the Azure Voting App. My teammate Paul Yu ported the app to Rust and we tweaked it a bit to let us highlight some of the basic features of Kubernetes.

    You should be able to replicate this in just about any Kubernetes environment, but we'll use Azure Kubernetes Service (AKS) as our working environment for this week.

    To make it easier to get started, there's a Bicep template to deploy an AKS cluster, an Azure Container Registry (ACR) (to host our container image), and connect the two so that we can easily deploy our application.

    Step 0 - Prerequisites

    There are a few things you'll need if you want to work through this and the following examples this week.

    Required:

    • Git (and probably a GitHub account if you want to persist your work outside of your computer)
    • Azure CLI
    • An Azure subscription (if you want to follow along with the Azure steps)
    • Kubectl (the command line tool for managing Kubernetes)

    Helpful:

    • Visual Studio Code (or equivalent editor)
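
    A quick way to confirm the required tooling is available before you start - a small sketch; your versions will differ:

    git --version
    az --version
    kubectl version --client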

    Step 1 - Clone the application repository

    First, I forked the source repository to my account.

    $GitHubOrg = 'smurawski' # Replace this with your GitHub account name or org name
    git clone "https://github.com/$GitHubOrg/azure-voting-app-rust"
    cd azure-voting-app-rust

    Leave your shell opened with your current location inside the application repository.

    Step 2 - Set up AKS

    Running the template deployment from the demo script (I'm using the PowerShell example in cnny23-week2-day1.ps1, but there's a Bash variant at cnny23-week2-day1.sh) stands up the environment. The second, third, and fourth commands take some of the output from the Bicep deployment to set up for later commands, so don't close out your shell after you run these commands.

    az deployment sub create --template-file ./deploy/main.bicep --location eastus --parameters 'resourceGroup=cnny-week2'
    $AcrName = az deployment sub show --name main --query 'properties.outputs.acr_name.value' -o tsv
    $AksName = az deployment sub show --name main --query 'properties.outputs.aks_name.value' -o tsv
    $ResourceGroup = az deployment sub show --name main --query 'properties.outputs.resource_group_name.value' -o tsv

    az aks get-credentials --resource-group $ResourceGroup --name $AksName

    Step 3 - Build our application container

    Since we have an Azure Container Registry set up, I'll use ACR Build Tasks to build and store my container image.

    az acr build --registry $AcrName --% --image cnny2023/azure-voting-app-rust:{{.Run.ID}} .
    $BuildTag = az acr repository show-tags `
    --name $AcrName `
    --repository cnny2023/azure-voting-app-rust `
    --orderby time_desc `
    --query '[0]' -o tsv
    tip

    Wondering what the --% is in the first command line? That tells the PowerShell interpreter to pass the input after it "as is" to the command without parsing/evaluating it. Otherwise, PowerShell messes a bit with the templated {{.Run.ID}} bit.

    Running Containers in Kubernetes Pods

    Now that we have our AKS cluster and application image ready to go, let's look into how Kubernetes runs containers.

    If you've been in tech for any length of time, you've seen that every framework, runtime, orchestrator, etc. can have its own naming scheme for its concepts. So let's get into some of what Kubernetes calls things.

    The Pod

    A container running in Kubernetes is called a Pod. A Pod is basically a running container on a Node or VM. It can be more than that - for example, you can run multiple containers in a Pod and specify some funky configuration - but we'll keep it simple for now and add the complexity when we need it.

    Our Pod definition can be created via the kubectl command imperatively from arguments or declaratively from a configuration file. We'll do a little of both. We'll use the kubectl command to help us write our configuration files. Kubernetes configuration files are YAML, so having an editor that supports and can help you syntax check YAML is really helpful.

    Creating a Pod Definition

    Let's create a few Pod definitions. Our application requires two containers to get working - the application and a database.

    Let's create the database Pod first. And before you comment, the configuration isn't secure nor best practice. We'll fix that later this week. For now, let's focus on getting up and running.

    This is a trick I learned from one of my teammates - Paul. By using the --output yaml and --dry-run=client options, we can have the command help us write our YAML. And with a bit of output redirection, we can stash it safely in a file for later use.

    kubectl run azure-voting-db `
    --image "postgres:15.0-alpine" `
    --env "POSTGRES_PASSWORD=mypassword" `
    --output yaml `
    --dry-run=client > manifests/pod-db.yaml

    This creates a file that looks like:

    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: azure-voting-db
      name: azure-voting-db
    spec:
      containers:
      - env:
        - name: POSTGRES_PASSWORD
          value: mypassword
        image: postgres:15.0-alpine
        name: azure-voting-db
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}

    The file, when supplied to the Kubernetes API, will identify what kind of resource to create, the API version to use, and the details of the container (as well as an environment variable to be supplied).

    We'll get that container image started with the kubectl command. Because the details of what to create are in the file, we don't need to specify much else to the kubectl command but the path to the file.

    kubectl apply -f ./manifests/pod-db.yaml

    I'm going to need the IP address of the Pod so that my application can connect to it, so we can use kubectl to get some information about our pod. By default, kubectl get pod only displays certain information, but it retrieves a lot more. We can use the JSONPath syntax to index into the response and get the information we want.

    tip

    To see what you can get, I usually run the kubectl command with the JSON output type (-o json), and then I can find where the data I want is and create my JSONPath query to get it.

    $DB_IP = kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    Now, let's create our Pod definition for our application. We'll use the same technique as before.

    kubectl run azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --env "DATABASE_SERVER=$DB_IP" `
    --env "DATABASE_PASSWORD=mypassword" `
    --output yaml `
    --dry-run=client > manifests/pod-app.yaml

    That command gets us a similar YAML file to the database container - you can see the full file here

    Let's get our application container running.

    kubectl apply -f ./manifests/pod-app.yaml

    Now that the Application is Running

    We can check the status of our Pods with:

    kubectl get pods

    And we should see something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app 1/1 Running 0 36s
    azure-voting-db 1/1 Running 0 84s

    Once our pod is running, we can check to make sure everything is working by letting kubectl proxy network connections to our Pod running the application. If we get the voting web page, we'll know the application found the database and we can start voting!

    kubectl port-forward pod/azure-voting-app 8080:8080

    Azure voting website in a browser with three buttons, one for Dogs, one for Cats, and one for Reset.  The counter is Dogs - 0 and Cats - 0.

    When you are done voting, you can stop the port forwarding by using Control-C to break the command.

    Clean Up

    Let's clean up after ourselves and see if we can't get Kubernetes to help us keep our application running. We can use the same configuration files to ensure that Kubernetes only removes what we want removed.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    Summary - Pods

    A Pod is the most basic unit of work inside Kubernetes. Once the Pod is deleted, it's gone. That leads us to our next topic (and final topic for today.)

    Making the Pods Resilient with Deployments

    We've seen how easy it is to deploy a Pod and get our containers running on Nodes in our Kubernetes cluster. But there's a problem with that. Let's illustrate it.

    Breaking Stuff

    Setting Back Up

    First, let's redeploy our application environment. We'll start with our application container.

    kubectl apply -f ./manifests/pod-db.yaml
    kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    The second command will report out the new IP Address for our database container. Let's open ./manifests/pod-app.yaml and update the container IP to our new one.

    - name: DATABASE_SERVER
      value: YOUR_NEW_IP_HERE

    Then we can deploy the application with the information it needs to find its database. We'll also list out our pods to see what is running.

    kubectl apply -f ./manifests/pod-app.yaml
    kubectl get pods

    Feel free to look back and use the port forwarding trick to make sure your app is running if you'd like.

    Knocking It Down

    The first thing we'll try to break is our application pod. Let's delete it.

    kubectl delete pod azure-voting-app

    Then, we'll check our pod's status:

    kubectl get pods

    Which should show something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db 1/1 Running 0 50s

    We should be able to recreate our application pod deployment with no problem, since it has the current database IP address and nothing else depends on it.

    kubectl apply -f ./manifests/pod-app.yaml

    Again, feel free to do some fun port forwarding and check your site is running.

    Uncomfortable Truths

    Here's where it gets a bit stickier: what if we delete the database container?

    If we delete our database container and recreate it, it'll likely have a new IP address, which would force us to update our application configuration. We'll look at some solutions for these problems in the next three posts this week.

    Because our database problem is a bit tricky, we'll primarily focus on making our application layer more resilient and prepare our database layer for those other techniques over the next few days.

    Let's clean back up and look into making things more resilient.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    The Deployment

    One of the reasons you may want to use Kubernetes is its ability to orchestrate workloads. Part of that orchestration includes being able to ensure that certain workloads are running (regardless of which Node they might be on).

    We saw that we could delete our application pod and then restart it from the manifest with little problem. It just meant that we had to run a command to restart it. We can use the Deployment in Kubernetes to tell the orchestrator to ensure we have our application pod running.

    The Deployment also can encompass a lot of extra configuration - controlling how many containers of a particular type should be running, how upgrades of container images should proceed, and more.

    Creating the Deployment

    First, we'll create a Deployment for our database. We'll use a technique similar to what we did for the Pod, with just a bit of difference.

    kubectl create deployment azure-voting-db `
    --image "postgres:15.0-alpine" `
    --port 5432 `
    --output yaml `
    --dry-run=client > manifests/deployment-db.yaml

    Unlike our Pod definition creation, we can't pass in environment variable configuration from the command line. We'll have to edit the YAML file to add that.

    So, let's open ./manifests/deployment-db.yaml in our editor and add the following in the spec/containers configuration.

            env:
            - name: POSTGRES_PASSWORD
              value: "mypassword"

    Your file should look like this deployment-db.yaml.

    Once we have our configuration file updated, we can deploy our database container image.

    kubectl apply -f ./manifests/deployment-db.yaml

    For our application, we'll use the same technique.

    kubectl create deployment azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --port 8080 `
    --output yaml `
    --dry-run=client > manifests/deployment-app.yaml

    Next, we'll need to add an environment variable to the generated configuration. We'll also need the new IP address for the database deployment.

    Previously, we named the pod and were able to ask for the IP address with kubectl and a bit of JSONPath. Now, the deployment created the pod for us, so there's a bit of randomness in the naming. Check out:

    kubectl get pods

    Should return something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 7s

    We can either ask for the IP with the new pod name, or we can use a selector to find our desired pod.

    kubectl get pod --selector app=azure-voting-db -o jsonpath='{.items[0].status.podIP}'

    Now, we can update our application deployment configuration file with:

            env:
            - name: DATABASE_SERVER
              value: YOUR_NEW_IP_HERE
            - name: DATABASE_PASSWORD
              value: mypassword

    Your file should look like this deployment-app.yaml (but with IPs and image names matching your environment).

    After we save those changes, we can deploy our application.

    kubectl apply -f ./manifests/deployment-app.yaml

    Let's test the resilience of our app now. First, we'll delete the pod running our application, then we'll check to make sure Kubernetes restarted our application pod.

    kubectl get pods
    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app-56c9ccc89d-skv7x 1/1 Running 0 71s
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 12m
    kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
    kubectl get pods
    azure-voting-app-rust ❯  kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
    >> kubectl get pods
    pod "azure-voting-app-56c9ccc89d-skv7x" deleted
    NAME READY STATUS RESTARTS AGE
    azure-voting-app-56c9ccc89d-2b5mx 1/1 Running 0 2s
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 14m
    info

    Your Pods will likely have different identifiers at the end, so adjust your commands to match the names in your environment.

    As you can see, by the time the kubectl get pods command was run, Kubernetes had already spun up a new pod for the application container image. Thanks Kubernetes!
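If you want to watch the replacement happen in real time instead of racing the scheduler, one option (a sketch, relying on the app=azure-voting-app label that kubectl create deployment added for us) is:

kubectl delete pod --selector app=azure-voting-app --wait=false
kubectl get pods --selector app=azure-voting-app --watch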

    Clean up

Since the Deployments would just recreate any pods we delete, we have to delete the Deployments themselves.

    kubectl delete -f ./manifests/deployment-app.yaml
    kubectl delete -f ./manifests/deployment-db.yaml

    Summary - Deployments

    Deployments allow us to create more durable configuration for the workloads we deploy into Kubernetes. As we dig deeper, we'll discover more capabilities the deployments offer. Check out the Resources below for more.

    Exercise

    If you want to try these steps, head over to the source repository, fork it, clone it locally, and give it a spin!

    You can check your manifests against the manifests in the week2/day1 branch of the source repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training

    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/8/index.html b/cnny-2023/tags/30-daysofcloudnative/page/8/index.html index 1616f30dd3..2cfba3bd64 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/8/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/8/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "30daysofcloudnative"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 3 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about Services and Ingress. Today we'll explore the topic of passing configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

• Decouple configurations with ConfigMaps and Secrets
    • Passing Environment Data with ConfigMaps and Secrets
    • Conclusion

Decouple configurations with ConfigMaps and Secrets

A ConfigMap is a Kubernetes object that decouples configuration data from pod definitions. Kubernetes Secrets are similar, but are designed to decouple sensitive information.

Separating the configuration and secrets from your application promotes better organization and security of your Kubernetes environment. It also enables you to share the same configuration and different secrets across multiple pods and deployments, which can simplify scaling and management. Using ConfigMaps and Secrets in Kubernetes is a best practice that can help to improve the scalability, security, and maintainability of your cluster.

    By the end of this tutorial, you'll have added a Kubernetes ConfigMap and Secret to the Azure Voting deployment.

    Passing Environment Data with ConfigMaps and Secrets

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Create the ConfigMap

    ConfigMaps can be used in one of two ways; as environment variables or volumes.

For this tutorial you'll use a ConfigMap to create three environment variables inside the pod: DATABASE_SERVER, FIRST_VALUE, and SECOND_VALUE. DATABASE_SERVER provides part of the connection string to the PostgreSQL database. FIRST_VALUE and SECOND_VALUE are configuration options that change which voting options the application presents to users.

    Follow the below steps to create a new ConfigMap:

    1. Create a YAML file named 'config-map.yaml'. In this file, specify the environment variables for the application.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: azure-voting-config
  data:
    DATABASE_SERVER: azure-voting-db
    FIRST_VALUE: "Go"
    SECOND_VALUE: "Rust"
    2. Create the config map in your Kubernetes cluster by running the following command:

      kubectl create -f config-map.yaml

    Create the Secret

    The deployment-db.yaml and deployment-app.yaml are Kubernetes manifests that deploy the Azure Voting App. Currently, those deployment manifests contain the environment variables POSTGRES_PASSWORD and DATABASE_PASSWORD with the value stored as plain text. Your task is to replace that environment variable with a Kubernetes Secret.

    Create a Secret running the following commands:

    1. Encode mypassword.

      echo -n "mypassword" | base64
2. Create a YAML file named secret.yaml. In this file, add POSTGRES_PASSWORD as the key and the encoded value returned above as the value in the data section.

  apiVersion: v1
  kind: Secret
  metadata:
    name: azure-voting-secret
  type: Opaque
  data:
    POSTGRES_PASSWORD: bXlwYXNzd29yZA==
    3. Create the Secret in your Kubernetes cluster by running the following command:

      kubectl create -f secret.yaml

[!WARNING] While base64 encoding is a simple and widely supported way to obscure plaintext data, it is not secure, as it can easily be decoded. If you want to store sensitive data like passwords, you should use a more secure method, such as encrypting with a Key Management Service (KMS), before storing it in the Secret.
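To see for yourself that this is encoding rather than encryption, you can decode the value right back (a quick check; the --decode flag works with both the GNU and macOS base64 tools):

echo -n "bXlwYXNzd29yZA==" | base64 --decode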

    Modify the app deployment manifest

With the ConfigMap and Secret both created, the next step is to replace the environment variables provided in the application deployment manifest with the values stored in the ConfigMap and the Secret.

Complete the following steps to add the ConfigMap and Secret to the deployment manifest:

    1. Open the Kubernetes manifest file deployment-app.yaml.

2. In the containers section, add an envFrom section and update the env section.

  envFrom:
  - configMapRef:
      name: azure-voting-config
  env:
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: azure-voting-secret
        key: POSTGRES_PASSWORD

  Using envFrom exposes all the values within the ConfigMap as environment variables, so you don't have to list them individually.

    3. Save the changes to the deployment manifest file.

    4. Apply the changes to the deployment by running the following command:

      kubectl apply -f deployment-app.yaml

    Modify the database deployment manifest

Next, update the database deployment manifest and replace the plain text environment variable with the Kubernetes Secret.

    1. Open the deployment-db.yaml.

    2. To add the secret to the deployment, replace the env section with the following code:

  env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: azure-voting-secret
        key: POSTGRES_PASSWORD
    3. Apply the updated manifest.

      kubectl apply -f deployment-db.yaml

    Verify the ConfigMap and output environment variables

Verify that the ConfigMap was added to your deployment by running the following command:

kubectl describe deployment azure-voting-app

    Browse the output until you find the envFrom section with the config map reference.

You can also verify that the environment variables from the ConfigMap are being passed to the container by running the command kubectl exec -it <pod-name> -- printenv. This command will show you all the environment variables passed to the pod, including the ones from the ConfigMap.
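For example (a sketch, assuming the deployment is still named azure-voting-app), you can skip looking up the pod name and exec through the deployment instead:

kubectl exec deploy/azure-voting-app -- printenv | grep -E 'DATABASE_SERVER|FIRST_VALUE|SECOND_VALUE'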

    By following these steps, you will have successfully added a config map to the Azure Voting App Kubernetes deployment, and the environment variables defined in the config map will be passed to the container running in the pod.

    Verify the Secret and describe the deployment

    Once the secret has been created you can verify it exists by running the following command:

    kubectl get secrets

    You can view additional information, such as labels, annotations, type, and the Data by running kubectl describe:

    kubectl describe secret azure-voting-secret

    By default, the describe command doesn't output the encoded value, but if you output the results as JSON or YAML you'll be able to see the secret's encoded value.

     kubectl get secret azure-voting-secret -o json
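If you want to go one step further and recover the plaintext (again, only to illustrate that base64 is not encryption), a quick sketch is:

kubectl get secret azure-voting-secret -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 --decode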

    Conclusion

    In conclusion, using ConfigMaps and Secrets in Kubernetes can help to improve the scalability, security, and maintainability of your cluster. By decoupling configuration data and sensitive information from pod definitions, you can promote better organization and security in your Kubernetes environment. Additionally, separating these elements allows for sharing the same configuration and different secrets across multiple pods and deployments, simplifying scaling and management.

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/30-daysofcloudnative/page/9/index.html b/cnny-2023/tags/30-daysofcloudnative/page/9/index.html index c32e70c426..ffabb896bf 100644 --- a/cnny-2023/tags/30-daysofcloudnative/page/9/index.html +++ b/cnny-2023/tags/30-daysofcloudnative/page/9/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "30daysofcloudnative"

    View All Tags

    · 10 min read
    Steven Murawski

    Welcome to Day 5 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about adding persistent storage to our deployment. Today we'll explore the topic of scaling pods and nodes in our Kubernetes cluster.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Scaling Our Application
    • Scaling Pods
    • Scaling Nodes
    • Exercise
    • Resources

    Scaling Our Application

    One of our primary reasons to use a service like Kubernetes to orchestrate our workloads is the ability to scale. We've approached scaling in a multitude of ways over the years, taking advantage of the ever-evolving levels of hardware and software. Kubernetes allows us to scale our units of work, Pods, and the Nodes they run on. This allows us to take advantage of both hardware and software scaling abilities. Kubernetes can help improve the utilization of existing hardware (by scheduling Pods on Nodes that have resource capacity). And, with the capabilities of virtualization and/or cloud hosting (or a bit more work, if you have a pool of physical machines), Kubernetes can expand (or contract) the number of Nodes capable of hosting Pods. Scaling is primarily driven by resource utilization, but can be triggered by a variety of other sources thanks to projects like Kubernetes Event-driven Autoscaling (KEDA).

    Scaling Pods

Our first level of scaling is with our Pods. Earlier, when we worked on our deployment, we talked about how Kubernetes would use the deployment configuration to ensure that we had the desired workloads running. One thing we didn't explore was running more than one instance of a pod. We can define a number of replicas of a pod in our Deployment.

    Manually Scale Pods

    So, if we wanted to define more pods right at the start (or at any point really), we could update our deployment configuration file with the number of replicas and apply that configuration file.

spec:
  replicas: 5

    Or we could use the kubectl scale command to update the deployment with a number of pods to create.

    kubectl scale --replicas=5 deployment/azure-voting-app

    Both of these approaches modify the running configuration of our Kubernetes cluster and request that it ensure that we have that set number of replicas running. Because this was a manual change, the Kubernetes cluster won't automatically increase or decrease the number of pods. It'll just ensure that there are always the specified number of pods running.
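You can confirm the change took effect by checking the deployment itself; assuming the deployment from earlier is still named azure-voting-app, something like this works:

kubectl get deployment azure-voting-app

The READY column should eventually show 5/5.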

    Autoscale Pods with the Horizontal Pod Autoscaler

    Another approach to scaling our pods is to allow the Horizontal Pod Autoscaler to help us scale in response to resources being used by the pod. This requires a bit more configuration up front. When we define our pod in our deployment, we need to include resource requests and limits. The requests help Kubernetes determine what nodes may have capacity for a new instance of a pod. The limit tells us where the node should cap utilization for a particular instance of a pod. For example, we'll update our deployment to request 0.25 CPU and set a limit of 0.5 CPU.

    spec:
      containers:
      - image: acrudavoz.azurecr.io/cnny2023/azure-voting-app-rust:ca4
        name: azure-voting-app-rust
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          value: postgres://postgres:mypassword@10.244.0.29
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m

Now that we've given Kubernetes an allowed range and an idea of what free resources a node should have to place new pods, we can set up autoscaling. Because autoscaling is a persistent configuration, I like to define it in a configuration file that I can keep with the rest of my cluster configuration. We'll use the kubectl command to help us write the configuration file. We'll request that Kubernetes watch our pods and, when the average CPU utilization exceeds 50% of the requested usage (in our case, if more than 0.375 CPU is in use across the current three pods), grow the number of pods serving requests up to 10. If the utilization drops, Kubernetes has permission to deprovision pods down to the minimum (three in our example).

kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client

    Which would give us:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: null
  name: azure-voting-app
spec:
  maxReplicas: 10
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: azure-voting-app
  targetCPUUtilizationPercentage: 50
status:
  currentReplicas: 0
  desiredReplicas: 0

So, how often does the autoscaler check the metrics being monitored? The autoscaler checks the Metrics API every 15 seconds, but the pod stats are only updated every 60 seconds. This means that an autoscale event may be evaluated about once a minute. Once a scale-down event happens, however, Kubernetes applies a cooldown period to give the remaining pods a chance to redistribute the workload and let new metrics accumulate before scaling down further. There is no delay on scale-up events.
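Once the autoscaler is in place, you can watch it make those decisions as load changes; assuming the HorizontalPodAutoscaler was created with the name azure-voting-app as above, a simple way is:

kubectl get hpa azure-voting-app --watch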

    Application Architecture Considerations

    We've focused in this example on our front end, which is an easier scaling story. When we start talking about scaling our database layers or anything that deals with persistent storage or has primary/replica configuration requirements things get a bit more complicated. Some of these applications may have built-in leader election or could use sidecars to help use existing features in Kubernetes to perform that function. For shared storage scenarios, persistent volumes (or persistent volumes with Azure) can be of help, if the application knows how to play well with shared file access.

    Ultimately, you know your application architecture and, while Kubernetes may not have an exact match to how you are doing things today, the underlying capability is probably there under a different name. This abstraction allows you to more effectively use Kubernetes to operate a variety of workloads with the levels of controls you need.

    Scaling Nodes

We've looked at how to scale our pods, but that assumes we have enough resources in our existing pool of nodes to accommodate those scaling requests. Kubernetes can also help scale our available nodes to ensure that our applications have the necessary resources to meet their performance requirements.

    Manually Scale Nodes

Manually scaling nodes isn't a direct function of Kubernetes, so your operating environment instructions may vary. On Azure, it's pretty straightforward. Using the Azure CLI (or other tools), we can tell our AKS cluster to scale up or scale down the number of nodes in our node pool.

    First, we'll check out how many nodes we currently have in our working environment.

    kubectl get nodes

    This will show us

    azure-voting-app-rust ❯  kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    aks-pool0-37917684-vmss000000 Ready agent 5d21h v1.24.6

    Then, we'll scale it up to three nodes.

    az aks scale --resource-group $ResourceGroup --name $AksName --node-count 3

    Then, we'll check out how many nodes we now have in our working environment.

    kubectl get nodes

    Which returns:

    azure-voting-app-rust ❯  kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    aks-pool0-37917684-vmss000000 Ready agent 5d21h v1.24.6
    aks-pool0-37917684-vmss000001 Ready agent 5m27s v1.24.6
    aks-pool0-37917684-vmss000002 Ready agent 5m10s v1.24.6

    Autoscale Nodes with the Cluster Autoscaler

    Things get more interesting when we start working with the Cluster Autoscaler. The Cluster Autoscaler watches for the inability of Kubernetes to schedule the required number of pods due to resource constraints (and a few other criteria like affinity/anti-affinity). If there are insufficient resources available on the existing nodes, the autoscaler can provision new nodes into the nodepool. Likewise, the autoscaler watches to see if the existing pods could be consolidated to a smaller set of nodes and can remove excess nodes.

    Enabling the autoscaler is likewise an update that can be dependent on where and how your Kubernetes cluster is hosted. Azure makes it easy with a simple Azure CLI command.

    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 1 `
    --max-count 5

    There are a variety of settings that can be configured to tune how the autoscaler works.
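As a sketch of what that tuning can look like, the Azure CLI exposes a cluster autoscaler profile; for example, shortening the scan interval and the scale-down delay might look something like this (the values here are illustrative, not recommendations):

az aks update `
--resource-group $ResourceGroup `
--name $AksName `
--cluster-autoscaler-profile scan-interval=30s scale-down-delay-after-add=5m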

    Scaling on Different Events

    CPU and memory utilization are the primary drivers for the Horizontal Pod Autoscaler, but those might not be the best measures as to when you might want to scale workloads. There are other options for scaling triggers and one of the more common plugins to help with that is the Kubernetes Event-driven Autoscaling (KEDA) project. The KEDA project makes it easy to plug in different event sources to help drive scaling. Find more information about using KEDA on AKS here.
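As a rough sketch of what KEDA looks like in practice (assuming KEDA is already installed in the cluster, for example via the AKS keda add-on), a ScaledObject that scales our deployment on a schedule using the cron scaler might look like this; the schedule and replica counts are placeholders:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-voting-app-scaler
spec:
  scaleTargetRef:
    name: azure-voting-app
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: cron
    metadata:
      timezone: America/New_York
      start: 0 8 * * 1-5
      end: 0 18 * * 1-5
      desiredReplicas: "5"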

    Exercise

    Let's try out the scaling configurations that we just walked through using our sample application. If you still have your environment from Day 1, you can use that.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Configure Horizontal Pod Autoscaler

• Edit ./manifests/deployment-app.yaml to include resource requests and limits.
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
• Apply the updated deployment configuration.
kubectl apply -f ./manifests/deployment-app.yaml
• Create the horizontal pod autoscaler configuration and apply it.
kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client > ./manifests/scaler-app.yaml
kubectl apply -f ./manifests/scaler-app.yaml
• Check to see your pods scale out to the minimum.
kubectl get pods

    Configure Cluster Autoscaler

    Configuring the basic behavior of the Cluster Autoscaler is a bit simpler. We just need to run the Azure CLI command to enable the autoscaler and define our lower and upper limits.

    • Check the current nodes available (should be 1).
    kubectl get nodes
    • Update the cluster to enable the autoscaler
    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 2 `
    --max-count 5
    • Check to see the current number of nodes (should be 2 now).
    kubectl get nodes

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training

    - + \ No newline at end of file diff --git a/cnny-2023/tags/addons/index.html b/cnny-2023/tags/addons/index.html index 285dab7632..22b6eaf849 100644 --- a/cnny-2023/tags/addons/index.html +++ b/cnny-2023/tags/addons/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "addons"

    View All Tags

    · 4 min read
    Jorge Arteiro

    Welcome to Day 4 of Week 4 of #CloudNativeNewYear!

The theme for this week is going further with Cloud Native. Yesterday we talked about Windows Containers. Today we'll explore add-ons and extensions available for Azure Kubernetes Service (AKS).

    What We'll Cover

    • Introduction
    • Add-ons
    • Extensions
    • Add-ons vs Extensions
    • Resources

    Introduction

    Azure Kubernetes Service (AKS) is a fully managed container orchestration service that makes it easy to deploy and manage containerized applications on Azure. AKS offers a number of features and capabilities, including the ability to extend its supported functionality through the use of add-ons and extensions.

    There are also integrations available from open-source projects and third parties, but they are not covered by the AKS support policy.

    Add-ons

Add-ons provide a supported way to extend AKS. Installation, configuration, and lifecycle are managed by AKS following pre-determined update rules.

As an example, let's enable Container Insights with the monitoring add-on on an existing AKS cluster using the az aks enable-addons --addons CLI command:

    az aks enable-addons \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --addons monitoring

Or you can use az aks create --enable-addons when creating new clusters:

    az aks create \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --enable-addons monitoring

    The current available add-ons are:

    1. http_application_routing - Configure ingress with automatic public DNS name creation. Only recommended for development.
    2. monitoring - Container Insights monitoring.
    3. virtual-node - CNCF virtual nodes open source project.
    4. azure-policy - Azure Policy for AKS.
    5. ingress-appgw - Application Gateway Ingress Controller (AGIC).
    6. open-service-mesh - CNCF Open Service Mesh project.
    7. azure-keyvault-secrets-provider - Azure Key Vault Secrets Provider for Secret Store CSI Driver.
    8. web_application_routing - Managed NGINX ingress Controller.
    9. keda - CNCF Event-driven autoscaling project.

    For more details, get the updated list of AKS Add-ons here
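To check which add-ons are already enabled on your cluster, recent versions of the Azure CLI can list them for you (a quick check, reusing the placeholder names from the commands above):

az aks addon list \
--name MyManagedCluster \
--resource-group MyResourceGroup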

    Extensions

Cluster Extensions use Helm charts and integrate with Azure Resource Manager (ARM) to provide installation and lifecycle management of capabilities on top of AKS.

Extensions can be auto-upgraded across minor versions, but this requires extra management and configuration. Using the scope parameter, an extension can be installed on the whole cluster or per namespace.

AKS Extensions require an Azure CLI extension to be installed. To add or update this CLI extension use the following commands:

    az extension add --name k8s-extension

    and to update an existing extension

    az extension update --name k8s-extension

    There are only 3 available extensions:

    1. Dapr - CNCF Dapr project.
2. Azure ML - Integrate Azure Machine Learning with AKS to train, run inference on, and manage ML models.
    3. Flux (GitOps) - CNCF Flux project integrated with AKS to enable cluster configuration and application deployment using GitOps.

    As an example, you can install Azure ML using the following command:

    az k8s-extension create \
    --name aml-compute --extension-type Microsoft.AzureML.Kubernetes \
    --scope cluster --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters \
    --configuration-settings enableInference=True allowInsecureConnections=True

    For more details, get the updated list of AKS Extensions here
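To see which extensions are already installed on a cluster, you can list them with the same CLI extension (a sketch, reusing the placeholders from the command above):

az k8s-extension list \
--cluster-name <clusterName> \
--resource-group <resourceGroupName> \
--cluster-type managedClusters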

    Add-ons vs Extensions

AKS Add-ons bring the advantage of being fully managed by AKS itself, while AKS Extensions are more flexible and configurable but require an extra level of management.

    Add-ons are part of the AKS resource provider in the Azure API, and AKS Extensions are a separate resource provider on the Azure API.

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/tags/aks/index.html b/cnny-2023/tags/aks/index.html index 6be6cb05c5..195409aa6c 100644 --- a/cnny-2023/tags/aks/index.html +++ b/cnny-2023/tags/aks/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "aks"

    View All Tags

    · 11 min read
    Paul Yu

    Welcome to Day 2 of Week 2 of #CloudNativeNewYear!

    The theme for this week is #Kubernetes fundamentals. Yesterday we talked about how to deploy a containerized web app workload to Azure Kubernetes Service (AKS). Today we'll explore the topic of services and ingress and walk through the steps of making our containers accessible both internally as well as over the internet so that you can share it with the world 😊

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Exposing Pods via Service
    • Exposing Services via Ingress
    • Takeaways
    • Resources

    Exposing Pods via Service

There are a few ways to expose your pod in Kubernetes. One way is to take an imperative approach and use the kubectl expose command. This is probably the quickest way to achieve your goal, but it isn't the best way. A better way to expose your pod is to take a declarative approach: create a service manifest file and deploy it using the kubectl apply command.

    Don't worry if you are unsure of how to make this manifest, we'll use kubectl to help generate it.

    First, let's ensure we have the database deployed on our AKS cluster.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests/deployment-db.yaml

Next, let's deploy the application. If you are following along from yesterday's content, there isn't anything you need to change; however, if you are deploying the app from scratch, you'll need to modify the deployment-app.yaml manifest and update it with your image tag and database pod's IP address.

    kubectl apply -f ./manifests/deployment-app.yaml

Now, let's expose the database using a service so that we can leverage Kubernetes' built-in service discovery and reference it by name rather than by pod IP. Run the following command.

    kubectl expose deployment azure-voting-db \
    --port=5432 \
    --target-port=5432

With the database exposed using a service, we can update the app deployment manifest to use the service name instead of the pod IP. This way, if the pod ever gets assigned a new IP, we don't have to worry about updating the IP and redeploying our web application each time. Kubernetes has an internal service discovery mechanism that allows us to reference a service by its name.
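If you're curious to see that name resolution in action, one quick way to check (a throwaway sketch using a temporary busybox pod) is:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup azure-voting-db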

    Let's make an update to the manifest. Replace the environment variable for DATABASE_SERVER with the following:

- name: DATABASE_SERVER
  value: azure-voting-db

    Re-deploy the app with the updated configuration.

    kubectl apply -f ./manifests/deployment-app.yaml

    One service down, one to go. Run the following command to expose the web application.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080

Notice the --type argument has a value of LoadBalancer. This service type is implemented by the cloud-controller-manager which is part of the Kubernetes control plane. When using a managed Kubernetes cluster such as Azure Kubernetes Service, a public standard load balancer will be provisioned when the service type is set to LoadBalancer. The load balancer will also have a public IP assigned, which will make your deployment publicly available.

    Kubernetes supports four service types:

    • ClusterIP: this is the default and limits service access to internal traffic within the cluster
    • NodePort: this assigns a port mapping on the node's IP address and allows traffic from the virtual network (outside the cluster)
    • LoadBalancer: as mentioned above, this creates a cloud-based load balancer
    • ExternalName: this is used in special case scenarios where you want to map a service to an external DNS name

    📝 NOTE: When exposing a web application to the internet, allowing external users to connect to your Service directly is not the best approach. Instead, you should use an Ingress, which we'll cover in the next section.

    Now, let's confirm you can reach the web app from the internet. You can use the following command to print the URL to your terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Great! The kubectl expose command gets the job done, but as mentioned above, it is not the best method of exposing deployments. It is better to expose deployments declaratively using a service manifest, so let's delete the services and redeploy using manifests.

    kubectl delete service azure-voting-db azure-voting-app

    To use kubectl to generate our manifest file, we can use the same kubectl expose command that we ran earlier but this time, we'll include --output=yaml and --dry-run=client. This will instruct the command to output the manifest that would be sent to the kube-api server in YAML format to the terminal.

    Generate the manifest for the database service.

    kubectl expose deployment azure-voting-db \
    --type=ClusterIP \
    --port=5432 \
    --target-port=5432 \
    --output=yaml \
    --dry-run=client > ./manifests/service-db.yaml

    Generate the manifest for the application service.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080 \
    --output=yaml \
    --dry-run=client > ./manifests/service-app.yaml

    The command above redirected the YAML output to your manifests directory. Here is what the web application service looks like.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: azure-voting-app
  type: LoadBalancer
status:
  loadBalancer: {}

    💡 TIP: To view the schema of any api-resource in Kubernetes, you can use the kubectl explain command. In this case the kubectl explain service command will tell us exactly what each of these fields do.
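For instance, to drill into a single field such as the service type, you can run:

kubectl explain service.spec.type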

    Re-deploy the services using the new service manifests.

    kubectl apply -f ./manifests/service-db.yaml -f ./manifests/service-app.yaml

    # You should see TYPE is set to LoadBalancer and the EXTERNAL-IP is set
    kubectl get service azure-voting-db azure-voting-app

    Confirm again that our application is accessible again. Run the following command to print the URL to the terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    That was easy, right? We just exposed both of our pods using Kubernetes services. The database only needs to be accessible from within the cluster so ClusterIP is perfect for that. For the web application, we specified the type to be LoadBalancer so that we can access the application over the public internet.

    But wait... remember that if you want to expose web applications over the public internet, a Service with a public IP is not the best way; the better approach is to use an Ingress resource.

    Exposing Services via Ingress

    If you read through the Kubernetes documentation on Ingress you will see a diagram that depicts the Ingress sitting in front of the Service resource with a routing rule between it. In order to use Ingress, you need to deploy an Ingress Controller and it can be configured with many routing rules to forward traffic to one or many backend services. So effectively, an Ingress is a load balancer for your Services.

    With that said, we no longer need a service type of LoadBalancer since the service does not need to be accessible from the internet. It only needs to be accessible from the Ingress Controller (internal to the cluster) so we can change the service type to ClusterIP.

Update your ./manifests/service-app.yaml file to look like this:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: azure-voting-app

    📝 NOTE: The default service type is ClusterIP so we can omit the type altogether.

    Re-apply the app service manifest.

    kubectl apply -f ./manifests/service-app.yaml

    # You should see TYPE set to ClusterIP and EXTERNAL-IP set to <none>
    kubectl get service azure-voting-app

    Next, we need to install an Ingress Controller. There are quite a few options, and the Kubernetes-maintained NGINX Ingress Controller is commonly deployed.

    You could install this manually by following these instructions, but if you do that you'll be responsible for maintaining and supporting the resource.

    I like to take advantage of free maintenance and support when I can get it, so I'll opt to use the Web Application Routing add-on for AKS.

    💡 TIP: Whenever you install an AKS add-on, it will be maintained and fully supported by Azure Support.

    Enable the web application routing add-on in our AKS cluster with the following command.

az aks addon enable \
--name <YOUR_AKS_NAME> \
--resource-group <YOUR_AKS_RESOURCE_GROUP> \
--addon web_application_routing

    ⚠️ WARNING: This command can take a few minutes to complete

    Now, let's use the same approach we took in creating our service to create our Ingress resource. Run the following command to generate the Ingress manifest.

    kubectl create ingress azure-voting-app \
    --class=webapprouting.kubernetes.azure.com \
    --rule="/*=azure-voting-app:80" \
    --output yaml \
    --dry-run=client > ./manifests/ingress.yaml

    The --class=webapprouting.kubernetes.azure.com option activates the AKS web application routing add-on. This AKS add-on can also integrate with other Azure services such as Azure DNS and Azure Key Vault for TLS certificate management and this special class makes it all work.

    The --rule="/*=azure-voting-app:80" option looks confusing but we can use kubectl again to help us understand how to format the value for the option.

    kubectl create ingress --help

    In the output you will see the following:

    --rule=[]:
    Rule in format host/path=service:port[,tls=secretname]. Paths containing the leading character '*' are
    considered pathType=Prefix. tls argument is optional.

    It expects a host and path separated by a forward-slash, then expects the backend service name and port separated by a colon. We're not using a hostname for this demo so we can omit it. For the path, an asterisk is used to specify a wildcard path prefix.

    So, the value of /*=azure-voting-app:80 creates a routing rule for all paths following the domain (or in our case since we don't have a hostname specified, the IP) to route traffic to our azure-voting-app backend service on port 80.

    📝 NOTE: Configuring the hostname and TLS is outside the scope of this demo but please visit this URL https://bit.ly/aks-webapp-routing for an in-depth hands-on lab centered around Web Application Routing on AKS.

    Your ingress.yaml file should look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: azure-voting-app
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - http:
      paths:
      - backend:
          service:
            name: azure-voting-app
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}

    Apply the app ingress manifest.

    kubectl apply -f ./manifests/ingress.yaml

    Validate the web application is available from the internet again. You can run the following command to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Takeaways

Exposing your applications both internally and externally can be easily achieved using Service and Ingress resources, respectively. If your service is HTTP or HTTPS based and needs to be accessible from outside the cluster, use an Ingress with an internal Service (i.e., ClusterIP or NodePort); otherwise, use the Service resource. If your TCP-based Service needs to be publicly accessible, you set the type to LoadBalancer to expose a public IP for it. To learn more about these resources, please visit the links listed below.

    Lastly, if you are unsure how to begin writing your service manifest, you can use kubectl and have it do most of the work for you 🥳

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/aks/page/2/index.html b/cnny-2023/tags/aks/page/2/index.html index 273d3afa91..bfe8ce915e 100644 --- a/cnny-2023/tags/aks/page/2/index.html +++ b/cnny-2023/tags/aks/page/2/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "aks"

    View All Tags

    · 8 min read
    Paul Yu

    Welcome to Day 4 of Week 2 of #CloudNativeNewYear!

The theme for this week is Kubernetes fundamentals. Yesterday we talked about how to set app configurations and secrets at runtime using Kubernetes ConfigMaps and Secrets. Today we'll explore the topic of persistent storage on Kubernetes and show how you can leverage Persistent Volumes and Persistent Volume Claims to ensure your PostgreSQL data can survive container restarts.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Containers are ephemeral
    • Persistent storage on Kubernetes
    • Persistent storage on AKS
    • Takeaways
    • Resources

    Containers are ephemeral

    In our sample application, the frontend UI writes vote values to a backend PostgreSQL database. By default the database container stores its data on the container's local file system, so there will be data loss when the pod is re-deployed or crashes as containers are meant to start with a clean slate each time.

    Let's re-deploy our sample app and experience the problem first hand.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests

    Wait for the azure-voting-app service to be assigned a public IP then browse to the website and submit some votes. Use the command below to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Now, let's delete the pods and watch Kubernetes do what it does best... that is, re-schedule pods.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl delete --all pod --wait=false && kubectl get po -w

    Once the pods have been recovered, reload the website and confirm the vote tally has been reset to zero.

    We need to fix this so that the data outlives the container.

    Persistent storage on Kubernetes

    In order for application data to survive crashes and restarts, you must implement Persistent Volumes and Persistent Volume Claims.

    A persistent volume represents storage that is available to the cluster. Storage volumes can be provisioned manually by an administrator or dynamically using Container Storage Interface (CSI) and storage classes, which includes information on how to provision CSI volumes.

    When a user needs to add persistent storage to their application, a persistent volume claim is made to allocate chunks of storage from the volume. This "claim" includes things like volume mode (e.g., file system or block storage), the amount of storage to allocate, the access mode, and optionally a storage class. Once a persistent volume claim has been deployed, users can add the volume to the pod and mount it in a container.

    In the next section, we'll demonstrate how to enable persistent storage on AKS.

    Persistent storage on AKS

    With AKS, CSI drivers and storage classes are pre-deployed into your cluster. This allows you to natively use Azure Disks, Azure Files, and Azure Blob Storage as persistent volumes. You can either bring your own Azure storage account and use it with AKS or have AKS provision an Azure storage account for you.

    To view the Storage CSI drivers that have been enabled in your AKS cluster, run the following command.

    az aks show \
    --name <YOUR_AKS_NAME> \
    --resource-group <YOUR_AKS_RESOURCE_GROUP> \
    --query storageProfile

    You should see output that looks like this.

    {
    "blobCsiDriver": null,
    "diskCsiDriver": {
    "enabled": true,
    "version": "v1"
    },
    "fileCsiDriver": {
    "enabled": true
    },
    "snapshotController": {
    "enabled": true
    }
    }

    To view the storage classes that have been installed in your cluster, run the following command.

    kubectl get storageclass

    Workload requirements will dictate which CSI driver and storage class you will need to use.

    If you need block storage, then you should use the blobCsiDriver. The driver may not be enabled by default but you can enable it by following instructions which can be found in the Resources section below.

    If you need file storage you should leverage either diskCsiDriver or fileCsiDriver. The decision between these two boils down to whether or not you need to have the underlying storage accessible by one pod or multiple pods. It is important to note that diskCsiDriver currently supports access from a single pod only. Therefore, if you need data to be accessible by multiple pods at the same time, then you should opt for fileCsiDriver.

For our PostgreSQL deployment, we'll use the diskCsiDriver and have AKS create an Azure Disk resource for us. There is no need to create a PV resource; all we need to do is create a PVC using the managed-csi-premium storage class.

    Run the following command to create the PVC.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: managed-csi-premium
EOF

    When you check the PVC resource, you'll notice the STATUS is set to Pending. It will be set to Bound once the volume is mounted in the PostgreSQL container.

    kubectl get persistentvolumeclaim

    Let's delete the azure-voting-db deployment.

    kubectl delete deploy azure-voting-db

    Next, we need to apply an updated deployment manifest which includes our PVC.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-db
  name: azure-voting-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-voting-db
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: azure-voting-db
    spec:
      containers:
      - image: postgres:15.0-alpine
        name: postgres
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: azure-voting-secret
              key: POSTGRES_PASSWORD
        resources: {}
        volumeMounts:
        - name: mypvc
          mountPath: "/var/lib/postgresql/data"
          subPath: "data"
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: pvc-azuredisk
EOF

    In the manifest above, you'll see that we are mounting a new volume called mypvc (the name can be whatever you want) in the pod which points to a PVC named pvc-azuredisk. With the volume in place, we can mount it in the container by referencing the name of the volume mypvc and setting the mount path to /var/lib/postgresql/data (which is the default path).

    💡 IMPORTANT: When mounting a volume into a non-empty subdirectory, you must add subPath to the volume mount and point it to a subdirectory in the volume rather than mounting at root. In our case, when Azure Disk is formatted, it leaves a lost+found directory as documented here.

Watch the pods and wait for the STATUS to show Running and the pod's READY column to show 1/1.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl get po -w

    Verify that the STATUS of the PVC is now set to Bound

    kubectl get persistentvolumeclaim

    With the new database container running, let's restart the application pod, wait for the pod's READY status to show 1/1, then head back over to our web browser and submit a few votes.

    kubectl delete pod -lapp=azure-voting-app --wait=false && kubectl get po -lapp=azure-voting-app -w

Now the moment of truth... let's rip out the pods again, wait for the pods to be re-scheduled, and confirm our vote counts remain intact.

    kubectl delete --all pod --wait=false && kubectl get po -w

If you navigate back to the website, you'll find the votes are still there 🎉

    Takeaways

By design, containers are meant to be ephemeral, and stateless workloads are ideal on Kubernetes. However, there will come a time when your data needs to outlive the container. To persist data in your Kubernetes workloads, you need to leverage PVs, PVCs, and optionally storage classes. In our demo scenario, we leveraged CSI drivers built into AKS and created a PVC using pre-installed storage classes. From there, we updated the database deployment to mount the PVC in the container and AKS did the rest of the work in provisioning the underlying Azure Disk. If the built-in storage classes do not fit your needs (for example, you need to change the reclaimPolicy or the SKU for the Azure resource), then you can create your own custom storage class and configure it just the way you need it 😊

    We'll revisit this topic again next week but in the meantime, check out some of the resources listed below to learn more.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/aks/page/3/index.html b/cnny-2023/tags/aks/page/3/index.html index 7ee80322ac..1a7f096aa5 100644 --- a/cnny-2023/tags/aks/page/3/index.html +++ b/cnny-2023/tags/aks/page/3/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "aks"

    View All Tags

    · 12 min read
    Paul Yu

    Welcome to Day 2 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about getting an existing application running in Kubernetes with a full pipeline in GitHub Actions. Today we'll evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes and Azure resources.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Implement environment variables using ConfigMaps
    • Implement persistent volumes using Azure Files
    • Implement secrets using Azure Key Vault
    • Re-package deployments
    • Conclusion
    • Resources
    caution

    Before you begin, make sure you've gone through yesterday's post to set up your AKS cluster.

    Gather requirements

The eShopOnWeb application is written in .NET 7 and has two major pieces of functionality. The web UI is where customers can browse and shop. The web UI also includes an admin portal for managing the product catalog. This admin portal is packaged as a WebAssembly application and relies on a separate REST API service. Both the web UI and the REST API connect to the same SQL Server container.

Looking through the source code, which can be found here, we can identify requirements for configs, persistent storage, and secrets.

    Database server

    • Need to store the password for the sa account as a secure secret
    • Need persistent storage volume for data directory
    • Need to inject environment variables for SQL Server license type and EULA acceptance

    Web UI and REST API service

    • Need to store database connection string as a secure secret
    • Need to inject ASP.NET environment variables to override app settings
    • Need persistent storage volume for ASP.NET key storage

    Implement environment variables using ConfigMaps

    ConfigMaps are relatively straight-forward to create. If you were following along with the examples last week, this should be review 😉

    Create a ConfigMap to store database environment variables.

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: mssql-settings
data:
  MSSQL_PID: Developer
  ACCEPT_EULA: "Y"
EOF

    Create another ConfigMap to store ASP.NET environment variables.

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aspnet-settings
data:
  ASPNETCORE_ENVIRONMENT: Development
EOF

    Implement persistent volumes using Azure Files

    Similar to last week, we'll take advantage of storage classes built into AKS. For our SQL Server data, we'll use the azurefile-csi-premium storage class and leverage an Azure Files resource as our PersistentVolume.

    Create a PersistentVolumeClaim (PVC) for persisting SQL Server data.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 5Gi
EOF

    Create another PVC for persisting ASP.NET data.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aspnet-data
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 5Gi
EOF

    Implement secrets using Azure Key Vault

It's a well-known fact that Kubernetes Secrets are not really secrets. They're just base64-encoded values and not secure, especially if malicious users have access to your Kubernetes cluster.

    In a production scenario, you will want to leverage an external vault like Azure Key Vault or HashiCorp Vault to encrypt and store secrets.

    With AKS, we can enable the Secrets Store CSI driver add-on which will allow us to leverage Azure Key Vault.

    # Set some variables
    RG_NAME=<YOUR_RESOURCE_GROUP_NAME>
    AKS_NAME=<YOUR_AKS_CLUSTER_NAME>
    ACR_NAME=<YOUR_ACR_NAME>

    az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name $AKS_NAME \
    --resource-group $RG_NAME

    With the add-on enabled, you should see aks-secrets-store-csi-driver and aks-secrets-store-provider-azure resources installed on each node in your Kubernetes cluster.

    Run the command below to verify.

    kubectl get pods \
    --namespace kube-system \
    --selector 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'

    The Secrets Store CSI driver allows us to use secret stores via Container Storage Interface (CSI) volumes. This provider offers capabilities such as mounting and syncing between the secure vault and Kubernetes Secrets. On AKS, the Azure Key Vault Provider for Secrets Store CSI Driver enables integration with Azure Key Vault.

    You may not have an Azure Key Vault created yet, so let's create one and add some secrets to it.

    AKV_NAME=$(az keyvault create \
    --name akv-eshop$RANDOM \
    --resource-group $RG_NAME \
    --query name -o tsv)

    # Database server password
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-password \
    --value "@someThingComplicated1234"

    # Catalog database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-catalog \
    --value "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    # Identity database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-identity \
    --value "Server=db;Database=Microsoft.eShopOnWeb.Identity;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    Pods authentication using Azure Workload Identity

    In order for our Pods to retrieve secrets from Azure Key Vault, we'll need to set up a way for the Pod to authenticate against Azure AD. This can be achieved by implementing the new Azure Workload Identity feature of AKS.

    info

    At the time of this writing, the workload identity feature of AKS is in Preview.

    The workload identity feature within AKS allows us to leverage native Kubernetes resources and link a Kubernetes ServiceAccount to an Azure Managed Identity to authenticate against Azure AD.

For the authentication flow, our Kubernetes cluster will act as an OpenID Connect (OIDC) issuer and will be able to issue identity tokens to ServiceAccounts which will be assigned to our Pods.

    The Azure Managed Identity will be granted permission to access secrets in our Azure Key Vault and with the ServiceAccount being assigned to our Pods, they will be able to retrieve our secrets.

    For more information on how the authentication mechanism all works, check out this doc.

    To implement all this, start by enabling the new preview feature for AKS.

    az feature register \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"
    caution

    This can take several minutes to complete.

Check the status and ensure the state shows Registered before moving forward.

    az feature show \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"

    Update your AKS cluster to enable the workload identity feature and enable the OIDC issuer endpoint.

    az aks update \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --enable-workload-identity \
    --enable-oidc-issuer

    Create an Azure Managed Identity and retrieve its client ID.

    MANAGED_IDENTITY_CLIENT_ID=$(az identity create \
    --name aks-workload-identity \
    --resource-group $RG_NAME \
    --subscription $(az account show --query id -o tsv) \
    --query 'clientId' -o tsv)

    Create the Kubernetes ServiceAccount.

    # Set namespace (this must align with the namespace that your app is deployed into)
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account name
    SERVICE_ACCOUNT_NAME=eshop-serviceaccount

    # Create the service account
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
  labels:
    azure.workload.identity/use: "true"
  name: ${SERVICE_ACCOUNT_NAME}
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
EOF
    info

Note: to enable this ServiceAccount to work with Azure Workload Identity, you must annotate the resource with azure.workload.identity/client-id and add the label azure.workload.identity/use: "true"
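If you want to confirm the ServiceAccount landed with the expected annotation and label, a quick check is to print it back out:

# Inspect the ServiceAccount and verify the workload identity annotation and label
kubectl get serviceaccount $SERVICE_ACCOUNT_NAME \
--namespace $SERVICE_ACCOUNT_NAMESPACE \
-o yaml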

    That was a lot... Let's review what we just did.

    We have an Azure Managed Identity (object in Azure AD), an OIDC issuer URL (endpoint in our Kubernetes cluster), and a Kubernetes ServiceAccount.

    The next step is to "tie" these components together and establish a Federated Identity Credential so that Azure AD can trust authentication requests from your Kubernetes cluster.

    info

This identity federation can be established between Azure AD and any Kubernetes cluster, not just AKS 🤗

    To establish the federated credential, we'll need the OIDC issuer URL, and a subject which points to your Kubernetes ServiceAccount.

    # Get the OIDC issuer URL
    OIDC_ISSUER_URL=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)

    # Set the subject name using this format: `system:serviceaccount:<YOUR_SERVICE_ACCOUNT_NAMESPACE>:<YOUR_SERVICE_ACCOUNT_NAME>`
    SUBJECT=system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME

    az identity federated-credential create \
    --name aks-federated-credential \
    --identity-name aks-workload-identity \
    --resource-group $RG_NAME \
    --issuer $OIDC_ISSUER_URL \
    --subject $SUBJECT

With the authentication components set, we can now create a SecretProviderClass which includes details about the Azure Key Vault, the secrets to pull from the vault, and the identity used to access the vault.

    # Get the tenant id for the key vault
    TENANT_ID=$(az keyvault show \
    --name $AKV_NAME \
    --resource-group $RG_NAME \
    --query properties.tenantId -o tsv)

    # Create the secret provider for azure key vault
kubectl apply -f - <<EOF
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: eshop-azure-keyvault
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "${MANAGED_IDENTITY_CLIENT_ID}"
    keyvaultName: "${AKV_NAME}"
    cloudName: ""
    objects: |
      array:
        - |
          objectName: mssql-password
          objectType: secret
          objectVersion: ""
        - |
          objectName: mssql-connection-catalog
          objectType: secret
          objectVersion: ""
        - |
          objectName: mssql-connection-identity
          objectType: secret
          objectVersion: ""
    tenantId: "${TENANT_ID}"
  secretObjects:
    - secretName: eshop-secrets
      type: Opaque
      data:
        - objectName: mssql-password
          key: mssql-password
        - objectName: mssql-connection-catalog
          key: mssql-connection-catalog
        - objectName: mssql-connection-identity
          key: mssql-connection-identity
EOF

Finally, let's grant the Azure Managed Identity permission to retrieve secrets from the Azure Key Vault.

    az keyvault set-policy \
    --name $AKV_NAME \
    --secret-permissions get \
    --spn $MANAGED_IDENTITY_CLIENT_ID

    Re-package deployments

    Update your database deployment to load environment variables from our ConfigMap, attach the PVC and SecretProviderClass as volumes, mount the volumes into the Pod, and use the ServiceAccount to retrieve secrets.

    Additionally, you may notice the database Pod is set to use fsGroup:10001 as part of the securityContext. This is required as the MSSQL container runs using a non-root account called mssql and this account has the proper permissions to read/write data at the /var/opt/mssql mount path.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      securityContext:
        fsGroup: 10001
      serviceAccountName: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: db
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          envFrom:
            - configMapRef:
                name: mssql-settings
          env:
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-password
          resources: {}
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
EOF

    We'll update the API and Web deployments in a similar way.

    # Set the image tag
    IMAGE_TAG=<YOUR_IMAGE_TAG>

    # API deployment
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      serviceAccount: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: api
          image: ${ACR_NAME}.azurecr.io/api:${IMAGE_TAG}
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: aspnet-settings
          env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
          resources: {}
          volumeMounts:
            - name: aspnet
              mountPath: ~/.aspnet/https:/root/.aspnet/https:ro
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: aspnet
          persistentVolumeClaim:
            claimName: aspnet-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
EOF

# Web deployment
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      serviceAccount: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: web
          image: ${ACR_NAME}.azurecr.io/web:${IMAGE_TAG}
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: aspnet-settings
          env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
          resources: {}
          volumeMounts:
            - name: aspnet
              mountPath: ~/.aspnet/https:/root/.aspnet/https:ro
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: aspnet
          persistentVolumeClaim:
            claimName: aspnet-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
EOF

    If all went well with your deployment updates, you should be able to browse to your website and buy some merchandise again 🥳

    echo "http://$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Conclusion

Although there are no visible changes to our website, we've made a ton of changes on the Kubernetes backend to make this application much more secure and resilient.

    We used a combination of Kubernetes resources and AKS-specific features to achieve our goal of securing our secrets and ensuring data is not lost on container crashes and restarts.

To learn more about the components we leveraged here today, check out the resources and additional tutorials listed below.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/aks/page/4/index.html b/cnny-2023/tags/aks/page/4/index.html index a881613ec9..73eef9e466 100644 --- a/cnny-2023/tags/aks/page/4/index.html +++ b/cnny-2023/tags/aks/page/4/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "aks"

    View All Tags

    · 10 min read
    Paul Yu

    Welcome to Day 3 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we added configuration, secrets, and storage to our app. Today we'll explore how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Generate TLS certificate and store in Azure Key Vault
    • Implement custom DNS using Azure DNS
    • Enable Web Application Routing add-on for AKS
    • Implement Ingress for the web application
    • Conclusion
    • Resources

    Gather requirements

    Currently, our eShopOnWeb app has three Kubernetes services deployed:

    1. db exposed internally via ClusterIP
    2. api exposed externally via LoadBalancer
    3. web exposed externally via LoadBalancer

As mentioned in my post last week, Services allow applications to communicate with each other using DNS names. Kubernetes has built-in service discovery capabilities that allow Pods to resolve Services simply by using their names.

    In the case of our api and web deployments, they can simply reach the database by calling its name. The service type of ClusterIP for the db can remain as-is since it only needs to be accessed by the api and web apps.
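If you want to see this service discovery in action, a throwaway Pod can resolve the db Service by name using the cluster's DNS. A minimal sketch using a generic busybox image:

# One-off debug Pod that resolves the db Service name and is removed afterwards
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup db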

    On the other hand, api and web both need to be accessed over the public internet. Currently, these services are using service type LoadBalancer which tells AKS to provision an Azure Load Balancer with a public IP address. No one is going to remember the IP addresses, so we need to make the app more accessible by adding a custom domain name and securing it with a TLS certificate.

    Here's what we're going to need:

    • Custom domain name for our app
    • TLS certificate for the custom domain name
• Routing rule to ensure requests with /api/ in the URL are routed to the backend REST API
• Routing rule to ensure requests without /api/ in the URL are routed to the web UI

    Just like last week, we will use the Web Application Routing add-on for AKS. But this time, we'll integrate it with Azure DNS and Azure Key Vault to satisfy all of our requirements above.

    info

At the time of this writing, the add-on is still in Public Preview.

    Generate TLS certificate and store in Azure Key Vault

    We deployed an Azure Key Vault yesterday to store secrets. We'll use it again to store a TLS certificate too.

    Let's create and export a self-signed certificate for the custom domain.

    DNS_NAME=eshoponweb$RANDOM.com
    openssl req -new -x509 -nodes -out web-tls.crt -keyout web-tls.key -subj "/CN=${DNS_NAME}" -addext "subjectAltName=DNS:${DNS_NAME}"
    openssl pkcs12 -export -in web-tls.crt -inkey web-tls.key -out web-tls.pfx -password pass:
    info

    For learning purposes we'll use a self-signed certificate and a fake custom domain name.

To browse to the site using the fake domain, we'll mimic a DNS lookup by adding an entry to your hosts file which maps the public IP address assigned to the ingress controller to the custom domain.

    In a production scenario, you will need to have a real domain delegated to Azure DNS and a valid TLS certificate for the domain.

    Grab your Azure Key Vault name and set the value in a variable for later use.

    RESOURCE_GROUP=cnny-week3

    AKV_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.KeyVault/vaults \
    --query "[0].name" -o tsv)

    Grant yourself permissions to get, list, and import certificates.

    MY_USER_NAME=$(az account show --query user.name -o tsv)
    MY_USER_OBJECT_ID=$(az ad user show --id $MY_USER_NAME --query id -o tsv)

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MY_USER_OBJECT_ID \
    --certificate-permissions get list import

    Upload the TLS certificate to Azure Key Vault and grab its certificate URI.

    WEB_TLS_CERT_ID=$(az keyvault certificate import \
    --vault-name $AKV_NAME \
    --name web-tls \
    --file web-tls.pfx \
    --query id \
    --output tsv)

    Implement custom DNS with Azure DNS

    Create a custom domain for our application and grab its Azure resource id.

    DNS_ZONE_ID=$(az network dns zone create \
    --name $DNS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query id \
    --output tsv)

    Enable Web Application Routing add-on for AKS

    As we enable the Web Application Routing add-on, we'll also pass in the Azure DNS Zone resource id which triggers the installation of the external-dns controller in your Kubernetes cluster. This controller will be able to write Azure DNS zone entries on your behalf as you deploy Ingress manifests.

    AKS_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerService/managedClusters \
    --query "[0].name" -o tsv)

    az aks enable-addons \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --addons web_application_routing \
    --dns-zone-resource-id=$DNS_ZONE_ID \
    --enable-secret-rotation

    The add-on will also deploy a new Azure Managed Identity which is used by the external-dns controller when writing Azure DNS zone entries. Currently, it does not have permission to do that, so let's grant it permission.

    # This is where resources are automatically deployed by AKS
    NODE_RESOURCE_GROUP=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query nodeResourceGroup -o tsv)

    # This is the managed identity created by the Web Application Routing add-on
MANAGED_IDENTITY_OBJECT_ID=$(az resource show \
    --name webapprouting-${AKS_NAME} \
    --resource-group $NODE_RESOURCE_GROUP \
    --resource-type Microsoft.ManagedIdentity/userAssignedIdentities \
    --query properties.principalId \
    --output tsv)

    # Grant the managed identity permissions to write DNS entries
    az role assignment create \
    --role "DNS Zone Contributor" \
--assignee $MANAGED_IDENTITY_OBJECT_ID \
    --scope $DNS_ZONE_ID

    The Azure Managed Identity will also be used to retrieve and rotate TLS certificates from Azure Key Vault. So we'll need to grant it permission for that too.

    az keyvault set-policy \
    --name $AKV_NAME \
--object-id $MANAGED_IDENTITY_OBJECT_ID \
    --secret-permissions get \
    --certificate-permissions get

    Implement Ingress for the web application

    Before we create a new Ingress manifest, let's update the existing services to use ClusterIP instead of LoadBalancer. With an Ingress in place, there is no reason why we need the Service resources to be accessible from outside the cluster. The new Ingress will be the only entrypoint for external users.

We can use the kubectl patch command to update the services.

    kubectl patch service api -p '{"spec": {"type": "ClusterIP"}}'
    kubectl patch service web -p '{"spec": {"type": "ClusterIP"}}'
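To confirm the patches took effect, list the services and check that api and web now report a type of ClusterIP with no external IP:

kubectl get service api web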

    Deploy a new Ingress to place in front of the web Service. Notice there is a special annotations entry for kubernetes.azure.com/tls-cert-keyvault-uri which points back to our self-signed certificate that was uploaded to Azure Key Vault.

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.azure.com/tls-cert-keyvault-uri: ${WEB_TLS_CERT_ID}
  name: web
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
    - host: ${DNS_NAME}
      http:
        paths:
          - backend:
              service:
                name: web
                port:
                  number: 80
            path: /
            pathType: Prefix
          - backend:
              service:
                name: api
                port:
                  number: 80
            path: /api
            pathType: Prefix
  tls:
    - hosts:
        - ${DNS_NAME}
      secretName: web-tls
EOF

In the manifest above, we've also configured the Ingress to route traffic to either the web or api service based on the requested URL path. If the request URL includes /api/, traffic is sent to the api backend service; otherwise, it is sent to the web service.

    Within a few minutes, the external-dns controller will add an A record to Azure DNS which points to the Ingress resource's public IP. With the custom domain in place, we can simply browse using this domain name.
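If you're curious, you can also verify the record that external-dns created in the zone (the record can take a few minutes to show up):

# List the A records in the Azure DNS zone created earlier
az network dns record-set a list \
--zone-name $DNS_NAME \
--resource-group $RESOURCE_GROUP \
--output table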

    info

As mentioned above, since this is not a real domain name, we need to modify our hosts file to make it seem like our custom domain is resolving to the Ingress' public IP address.

    To get the ingress public IP, run the following:

    # Get the IP
    kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

    # Get the hostname
    kubectl get ingress web -o jsonpath="{.spec.tls[0].hosts[0]}"

Next, open your hosts file and add an entry using the format <YOUR_PUBLIC_IP> <YOUR_CUSTOM_DOMAIN>. Below is an example of what it should look like.

    20.237.116.224 eshoponweb11265.com

    See this doc for more info on how to do this.

When browsing to the website, you may be presented with a warning about the connection not being private. This is because we are using a self-signed certificate. The warning is expected, so go ahead and proceed anyway to load the page.
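As an alternative to editing your hosts file for quick checks from the terminal, curl can pin the custom domain to the ingress IP for a single request. A sketch (the -k flag skips certificate validation since we're using a self-signed certificate):

INGRESS_IP=$(kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
curl -k --resolve ${DNS_NAME}:443:${INGRESS_IP} https://${DNS_NAME}/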

    Why is the Admin page broken?

    If you log in using the admin@microsoft.com account and browse to the Admin page, you'll notice no products are loaded on the page.

    This is because the admin page is built using Blazor and compiled as a WebAssembly application that runs in your browser. When the application was compiled, it packed the appsettings.Development.json file as an embedded resource. This file contains the base URL for the public API and it currently points to https://localhost:5099. Now that we have a domain name, we can update the base URL and point it to our custom domain.

    From the root of the eShopOnWeb repo, update the configuration file using a sed command.

    sed -i -e "s/localhost:5099/${DNS_NAME}/g" ./src/BlazorAdmin/wwwroot/appsettings.Development.json

    Rebuild and push the container to Azure Container Registry.

    # Grab the name of your Azure Container Registry
    ACR_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerRegistry/registries \
    --query "[0].name" -o tsv)

    # Invoke a build and publish job
    az acr build \
    --registry $ACR_NAME \
    --image $ACR_NAME.azurecr.io/web:v0.1.0 \
    --file ./src/Web/Dockerfile .

    Once the container build has completed, we can issue a kubectl patch command to quickly update the web deployment to test our change.

kubectl patch deployment web -p "$(cat <<EOF
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "web",
            "image": "${ACR_NAME}.azurecr.io/web:v0.1.0"
          }
        ]
      }
    }
  }
}
EOF
)"

    If all went well, you will be able to browse the admin page again and confirm product data is being loaded 🥳

    Conclusion

The Web Application Routing add-on for AKS aims to streamline the process of exposing your application to the public using the open-source NGINX Ingress Controller. With the add-on being managed by Azure, it natively integrates with other Azure services like Azure DNS and eliminates the need to manually create DNS entries. It can also integrate with Azure Key Vault to automatically pull in TLS certificates and rotate them as needed to further reduce operational overhead.

    We are one step closer to production and in the upcoming posts we'll further operationalize and secure our deployment, so stay tuned!

    In the meantime, check out the resources listed below for further reading.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/aks/page/5/index.html b/cnny-2023/tags/aks/page/5/index.html index 6d2bff0e97..53fd0c432e 100644 --- a/cnny-2023/tags/aks/page/5/index.html +++ b/cnny-2023/tags/aks/page/5/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "aks"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 5 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about debugging and instrumenting our application. Today we'll explore the topic of container image signing and secure supply chain.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Introduction
    • Prerequisites
    • Create a digital signing certificate
    • Generate an Azure Container Registry Token
    • Set up Notation
    • Install the Notation Azure Key Vault Plugin
    • Add the signing Certificate to Notation
    • Sign Container Images
    • Conclusion

    Introduction

    The secure supply chain is a crucial aspect of software development, delivery, and deployment, and digital signing plays a critical role in this process.

By using digital signatures to verify the authenticity and integrity of container images, organizations can improve the security of their software supply chain and reduce the risk of security breaches and data compromise.

In this article, you'll learn how to use Notary, an open-source project hosted by the Cloud Native Computing Foundation (CNCF), to digitally sign container images stored in Azure Container Registry.

    Prerequisites

To follow along, you'll need an instance of Azure Key Vault and an Azure Container Registry.

    Create a digital signing certificate

A digital signing certificate is a certificate that is used to digitally sign and verify the authenticity and integrity of digital artifacts such as documents, software, and of course container images.

    Before you can implement digital signatures, you must first create a digital signing certificate.

Run the following commands to generate the certificate:

    1. Create the policy file

  cat <<EOF > ./my_policy.json
  {
    "issuerParameters": {
      "certificateTransparency": null,
      "name": "Self"
    },
    "x509CertificateProperties": {
      "ekus": [
        "1.3.6.1.5.5.7.3.3"
      ],
      "key_usage": [
        "digitalSignature"
      ],
      "subject": "CN=${keySubjectName}",
      "validityInMonths": 12
    }
  }
  EOF

      The ekus and key usage of this certificate policy dictate that the certificate can only be used for digital signatures.

    2. Create the certificate in Azure Key Vault

      az keyvault certificate create --name $keyName --vault-name $keyVaultName --policy @my_policy.json

      Replace $keyName and $keyVaultName with your desired certificate name and Azure Key Vault instance name.

Generate an Azure Container Registry token

    Azure Container Registry tokens are used to grant access to the contents of the registry. Tokens can be used for a variety of things such as pulling images, pushing images, or managing the registry.

    As part of the container image signing workflow, you'll need a token to authenticate the Notation CLI with your Azure Container Registry.

    Run the following command to generate an ACR token:

    az acr token create \
    --name $tokenName \
    --registry $registry \
    --scope-map _repositories_admin \
    --query 'credentials.passwords[0].value' \
    --only-show-errors \
    --output tsv

    Replace $tokenName with your name for the ACR token and $registry with the name of your ACR instance.

Set up Notation

    Notation is the command-line interface for the CNCF Notary project. You'll use it to digitally sign the api and web container images for the eShopOnWeb application.

    Run the following commands to download and install the NotationCli:

    1. Open a terminal or command prompt window

    2. Download the Notary notation release

      curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      If you're not using Linux, you can find the releases here.

    3. Extract the contents of the notation.tar.gz

      tar xvzf notation.tar.gz > /dev/null 2>&1
    4. Copy the notation binary to the $HOME/bin directory

      cp ./notation $HOME/bin
    5. Add the $HOME/bin directory to the PATH environment variable

      export PATH="$HOME/bin:$PATH"
    6. Remove the downloaded files

      rm notation.tar.gz LICENSE
    7. Check the notation version

      notation --version

    Install the Notation Azure Key Vault plugin

    By design the NotationCli supports plugins that extend its digital signing capabilities to remote registries. And in order to sign your container images stored in Azure Container Registry, you'll need to install the Azure Key Vault plugin for Notation.

    Run the following commands to install the azure-kv plugin:

    1. Download the plugin

      curl -Lo notation-azure-kv.tar.gz \
      https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      Non-Linux releases can be found here.

    2. Extract to the plugin directory & delete download files

  tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv > /dev/null 2>&1

      rm -rf notation-azure-kv.tar.gz
    3. Verify the plugin was installed

      notation plugin ls

    Add the signing certificate to Notation

    Now that you have Notation and the Azure Key Vault plugin installed, add the certificate's keyId created above to Notation.

    1. Get the Certificate Key ID from Azure Key Vault

      az keyvault certificate show \
      --vault-name $keyVaultName \
      --name $keyName \
      --query "kid" --only-show-errors --output tsv

      Replace $keyVaultName and $keyName with the appropriate information.

    2. Add the Key ID to KMS using Notation

      notation key add --plugin azure-kv --id $keyID $keyName
    3. Check the key list

      notation key ls

    Sign Container Images

    At this point, all that's left is to sign the container images.

    Run the notation sign command to sign the api and web container images:

    notation sign $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword

    notation sign $registry.azurecr.io/api:$tag \
    --username $tokenName \
    --password $tokenPassword

    Replace $registry, $tag, $tokenName, and $tokenPassword with the appropriate values. To improve security, use a SHA hash for the tag.

    NOTE: If you didn't take note of the token password, you can rerun the az acr token create command to generate a new password.
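If you prefer to sign an immutable reference rather than a mutable tag, one way to look up an image's digest (a sketch; the exact output shape can vary by Azure CLI version) is:

# Look up the digest of the web image so it can be referenced as web@sha256:...
az acr repository show \
--name $registry \
--image web:$tag \
--query digest \
--output tsv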

    Conclusion

    Digital signing plays a critical role in ensuring the security of software supply chains.

    By signing software components, organizations can verify the authenticity and integrity of software, helping to prevent unauthorized modifications, tampering, and malware.

    And if you want to take digital signing to a whole new level by using them to prevent the deployment of unsigned container images, check out the Ratify project on GitHub!

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/index.html b/cnny-2023/tags/ask-the-expert/index.html index 4625c57caf..e1ab565e90 100644 --- a/cnny-2023/tags/ask-the-expert/index.html +++ b/cnny-2023/tags/ask-the-expert/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 4 min read
    Cory Skimming
    Devanshi Joshi
    Steven Murawski
    Nitya Narasimhan

    Welcome to the Kick-off Post for #30DaysOfCloudNative - one of the core initiatives within #CloudNativeNewYear! Over the next four weeks, join us as we take you from fundamentals to functional usage of Cloud-native technologies, one blog post at a time! Read on to learn a little bit about this initiative and what you can expect to learn from this journey!

    What We'll Cover


    Cloud-native New Year

    Welcome to Week 01 of 🥳 #CloudNativeNewYear ! Today, we kick off a full month of content and activities to skill you up on all things Cloud-native on Azure with content, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfCloudNative in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

    What are 3 things you can do today, to jumpstart your learning journey?


    #30DaysOfCloudNative

    #30DaysOfCloudNative is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Cloud-native On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative weekly-themed roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON CLOUD-NATIVE FUNDAMENTALS

    Here's a sneak peek at the week 1 schedule. We'll start with a broad review of cloud-native fundamentals and walkthrough the core concepts of microservices, containers and Kubernetes.

    • Jan 23: Learn Core Concepts for Cloud-native
    • Jan 24: Container 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud Native Options

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Cloud-native Fundamentals post on January 23rd!


    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/10/index.html b/cnny-2023/tags/ask-the-expert/page/10/index.html index 8207a60114..e7c20f437e 100644 --- a/cnny-2023/tags/ask-the-expert/page/10/index.html +++ b/cnny-2023/tags/ask-the-expert/page/10/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 14 min read
    Steven Murawski

    Welcome to Day 1 of Week 3 of #CloudNativeNewYear!

The theme for this week is Bringing Your Application to Kubernetes. Last week we talked about Kubernetes Fundamentals. Today we'll explore getting an existing application running in Kubernetes with a full pipeline in GitHub Actions.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Our Application
    • Adding Some Infrastructure as Code
    • Building and Publishing a Container Image
    • Deploying to Kubernetes
    • Summary
    • Resources

    Our Application

This week we'll be taking an existing application - something similar to a typical line of business application - and setting it up to run in Kubernetes. Over the course of the week, we'll address different concerns. Today we'll focus on updating our CI/CD process to handle standing up (or validating that we have) an Azure Kubernetes Service (AKS) environment, building and publishing container images for our web site and API server, and getting those services running in Kubernetes.

    The application we'll be starting with is eShopOnWeb. This application has a web site and API which are backed by a SQL Server instance. It's built in .NET 7, so it's cross-platform.

    info

For the enterprising among you, you may notice that there are a number of different eShopOn* variants on GitHub, including eShopOnContainers. We aren't using that example as it's more of an end state than a starting place. Afterwards, feel free to check out that example to see what this solution could look like as a series of microservices.

    Adding Some Infrastructure as Code

    Just like last week, we need to stand up an AKS environment. This week, however, rather than running commands in our own shell, we'll set up GitHub Actions to do that for us.

There is a LOT of plumbing in this section, but once it's set up, it'll make our lives a lot easier. This section ensures that we have an environment to deploy our application into, configured the way we want. We can easily extend this to accommodate multiple environments or add additional microservices with minimal new effort.

    Federated Identity

Setting up a federated identity will give us a more secure and auditable way of accessing Azure from GitHub Actions. For more about setting up a federated identity, Microsoft Learn has the details on connecting GitHub Actions to Azure.

Here, we'll just walk through the setup of the identity and configure GitHub to use that identity to deploy our AKS environment and interact with our Azure Container Registry.

    The examples will use PowerShell, but a Bash version of the setup commands is available in the week3/day1 branch.

    Prerequisites

    To follow along, you'll need:

    • a GitHub account
    • an Azure Subscription
    • the Azure CLI
    • and the Git CLI.

    You'll need to fork the source repository under your GitHub user or organization where you can manage secrets and GitHub Actions.

    It would be helpful to have the GitHub CLI, but it's not required.

    Set Up Some Defaults

    You will need to update one or more of the variables (your user or organization, what branch you want to work off of, and possibly the Azure AD application name if there is a conflict).

    # Replace the gitHubOrganizationName value
    # with the user or organization you forked
    # the repository under.

    $githubOrganizationName = 'Azure-Samples'
    $githubRepositoryName = 'eShopOnAKS'
    $branchName = 'week3/day1'
    $applicationName = 'cnny-week3-day1'

    Create an Azure AD Application

    Next, we need to create an Azure AD application.

    # Create an Azure AD application
    $aksDeploymentApplication = New-AzADApplication -DisplayName $applicationName

    Set Up Federation for that Azure AD Application

    And configure that application to allow federated credential requests from our GitHub repository for a particular branch.

    # Create a federated identity credential for the application
    New-AzADAppFederatedCredential `
    -Name $applicationName `
    -ApplicationObjectId $aksDeploymentApplication.Id `
    -Issuer 'https://token.actions.githubusercontent.com' `
    -Audience 'api://AzureADTokenExchange' `
    -Subject "repo:$($githubOrganizationName)/$($githubRepositoryName):ref:refs/heads/$branchName"

    Create a Service Principal for the Azure AD Application

    Once the application has been created, we need a service principal tied to that application. The service principal can be granted rights in Azure.

    # Create a service principal for the application
    New-AzADServicePrincipal -AppId $($aksDeploymentApplication.AppId)

    Give that Service Principal Rights to Azure Resources

    Because our Bicep deployment exists at the subscription level and we are creating role assignments, we need to give it Owner rights. If we changed the scope of the deployment to just a resource group, we could apply more scoped permissions.

    $azureContext = Get-AzContext
    New-AzRoleAssignment `
    -ApplicationId $($aksDeploymentApplication.AppId) `
    -RoleDefinitionName Owner `
    -Scope $azureContext.Subscription.Id

    Add Secrets to GitHub Repository

    If you have the GitHub CLI, you can use that right from your shell to set the secrets needed.

    gh secret set AZURE_CLIENT_ID --body $aksDeploymentApplication.AppId
    gh secret set AZURE_TENANT_ID --body $azureContext.Tenant.Id
    gh secret set AZURE_SUBSCRIPTION_ID --body $azureContext.Subscription.Id

    Otherwise, you can create them through the web interface like I did in the Learn Live event below.

    info

    It may look like the whole video will play, but it'll stop after configuring the secrets in GitHub (after about 9 minutes)

    The video shows creating the Azure AD application, service principals, and configuring the federated identity in Azure AD and GitHub.

    Creating a Bicep Deployment

Reusable Workflows

We'll create our Bicep deployment in a reusable workflow. What are they? The previous link has the documentation, or the video below has my colleague Brandon Martinez and me talking about them.

    This workflow is basically the same deployment we did in last week's series, just in GitHub Actions.

    Start by creating a file called deploy_aks.yml in the .github/workflows directory with the below contents.

name: deploy

on:
  workflow_call:
    inputs:
      resourceGroupName:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true
    outputs:
      containerRegistryName:
        description: Container Registry Name
        value: ${{ jobs.deploy.outputs.containerRegistryName }}
      containerRegistryUrl:
        description: Container Registry Login Url
        value: ${{ jobs.deploy.outputs.containerRegistryUrl }}
      resourceGroupName:
        description: Resource Group Name
        value: ${{ jobs.deploy.outputs.resourceGroupName }}
      aksName:
        description: Azure Kubernetes Service Cluster Name
        value: ${{ jobs.deploy.outputs.aksName }}

permissions:
  id-token: write
  contents: read

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/arm-deploy@v1
        name: Run preflight validation
        with:
          deploymentName: ${{ github.run_number }}
          scope: subscription
          region: eastus
          template: ./deploy/main.bicep
          parameters: >
            resourceGroup=${{ inputs.resourceGroupName }}
          deploymentMode: Validate

  deploy:
    needs: validate
    runs-on: ubuntu-latest
    outputs:
      containerRegistryName: ${{ steps.deploy.outputs.acr_name }}
      containerRegistryUrl: ${{ steps.deploy.outputs.acr_login_server_url }}
      resourceGroupName: ${{ steps.deploy.outputs.resource_group_name }}
      aksName: ${{ steps.deploy.outputs.aks_name }}
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/arm-deploy@v1
        id: deploy
        name: Deploy Bicep file
        with:
          failOnStdErr: false
          deploymentName: ${{ github.run_number }}
          scope: subscription
          region: eastus
          template: ./deploy/main.bicep
          parameters: >
            resourceGroup=${{ inputs.resourceGroupName }}
    Adding the Bicep Deployment

    Once we have the Bicep deployment workflow, we can add it to the primary build definition in .github/workflows/dotnetcore.yml

    Permissions

    First, we need to add a permissions block to let the workflow request our Azure AD token. This can go towards the top of the YAML file (I started it on line 5).

permissions:
  id-token: write
  contents: read

    Deploy AKS Job

    Next, we'll add a reference to our reusable workflow. This will go after the build job.

  deploy_aks:
    needs: [build]
    uses: ./.github/workflows/deploy_aks.yml
    with:
      resourceGroupName: 'cnny-week3'
    secrets:
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Building and Publishing a Container Image

    Now that we have our target environment in place and an Azure Container Registry, we can build and publish our container images.

    Add a Reusable Workflow

    First, we'll create a new file for our reusable workflow at .github/workflows/publish_container_image.yml.

    We'll start the file with a name, the parameters it needs to run, and the permissions requirements for the federated identity request.

name: Publish Container Images

on:
  workflow_call:
    inputs:
      containerRegistryName:
        required: true
        type: string
      containerRegistryUrl:
        required: true
        type: string
      githubSha:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write
  contents: read

    Build the Container Images

    Our next step is to build the two container images we'll need for the application, the website and the API. We'll build the container images on our build worker and tag it with the git SHA, so there'll be a direct tie between the point in time in our codebase and the container images that represent it.

jobs:
  publish_container_image:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - name: docker build
        run: |
          docker build . -f src/Web/Dockerfile -t ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          docker build . -f src/PublicApi/Dockerfile -t ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Scan the Container Images

    Before we publish those container images, we'll scan them for vulnerabilities and best practice violations. We can add these two steps (one scan for each image).

      - name: scan web container image
        uses: Azure/container-scan@v0
        with:
          image-name: ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
      - name: scan api container image
        uses: Azure/container-scan@v0
        with:
          image-name: ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

The container images provided have a few items that'll be found. We can create an allowed list at .github/containerscan/allowedlist.yaml to define vulnerabilities or best practice violations that we'll explicitly allow so they don't fail our build.

general:
  vulnerabilities:
    - CVE-2022-29458
    - CVE-2022-3715
    - CVE-2022-1304
    - CVE-2021-33560
    - CVE-2020-16156
    - CVE-2019-8457
    - CVE-2018-8292
  bestPracticeViolations:
    - CIS-DI-0001
    - CIS-DI-0005
    - CIS-DI-0006
    - CIS-DI-0008

    Publish the Container Images

    Finally, we'll log in to Azure, then log in to our Azure Container Registry, and push our images.

      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: acr login
        run: az acr login --name ${{ inputs.containerRegistryName }}
      - name: docker push
        run: |
          docker push ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          docker push ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Update the Build With the Image Build and Publish

Now that we have our reusable workflow to create and publish our container images, we can include that in our primary build definition at .github/workflows/dotnetcore.yml.

  publish_container_image:
    needs: [deploy_aks]
    uses: ./.github/workflows/publish_container_image.yml
    with:
      containerRegistryName: ${{ needs.deploy_aks.outputs.containerRegistryName }}
      containerRegistryUrl: ${{ needs.deploy_aks.outputs.containerRegistryUrl }}
      githubSha: ${{ github.sha }}
    secrets:
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Deploying to Kubernetes

    Finally, we've gotten enough set up that a commit to the target branch will:

    • build and test our application code
    • set up (or validate) our AKS and ACR environment
    • and create, scan, and publish our container images to ACR

    Our last step will be to deploy our application to Kubernetes. We'll use the basic building blocks we worked with last week, deployments and services.

    Starting the Reusable Workflow to Deploy to AKS

    We'll start our workflow with our parameters that we need, as well as the permissions to access the token to log in to Azure.

    We'll check out our code, then log in to Azure, and use the az CLI to get credentials for our AKS cluster.

name: deploy_to_aks

on:
  workflow_call:
    inputs:
      aksName:
        required: true
        type: string
      resourceGroupName:
        required: true
        type: string
      containerRegistryUrl:
        required: true
        type: string
      githubSha:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: Get AKS Credentials
        run: |
          az aks get-credentials --resource-group ${{ inputs.resourceGroupName }} --name ${{ inputs.aksName }}

    Edit the Deployment For Our Current Image Tag

    Let's add the Kubernetes manifests to our repo. This post is long enough, so you can find the content for the manifests folder in the manifests folder in the source repo under the week3/day1 branch.

    tip

    If you only forked the main branch of the source repo, you can easily get the updated manifests by using the following git commands:

    git remote add upstream https://github.com/Azure-Samples/eShopOnAks
    git fetch upstream week3/day1
    git checkout upstream/week3/day1 manifests

    This will make the week3/day1 branch available locally and then we can update the manifests directory to match the state of that branch.

The deployments and the service definitions should be familiar from last week's content (but not the same). This week, however, there's a new file in the manifests - ./manifests/kustomization.yaml

This file helps us dynamically edit our Kubernetes manifests, and support for it is baked right into the kubectl command.

    Kustomize Definition

    Kustomize allows us to specify specific resource manifests and areas of that manifest to replace. We've put some placeholders in our file as well, so we can replace those for each run of our CI/CD system.

    In ./manifests/kustomization.yaml you will see:

resources:
  - deployment-api.yaml
  - deployment-web.yaml

# Change the image name and version
images:
  - name: notavalidregistry.azurecr.io/api:v0.1.0
    newName: <YOUR_ACR_SERVER>/api
    newTag: <YOUR_IMAGE_TAG>
  - name: notavalidregistry.azurecr.io/web:v0.1.0
    newName: <YOUR_ACR_SERVER>/web
    newTag: <YOUR_IMAGE_TAG>
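Before wiring this into the pipeline, you can preview what the transformed manifests will look like locally; kubectl can render a kustomization without applying it:

# Render the kustomized manifests to stdout without applying them
kubectl kustomize ./manifests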

    Replacing Values in our Build

    Now, we encounter a little problem - our deployment files need to know the tag and ACR server. We can do a bit of sed magic to edit the file on the fly.

    In .github/workflows/deploy_to_aks.yml, we'll add:

      - name: replace_placeholders_with_current_run
        run: |
          sed -i "s/<YOUR_ACR_SERVER>/${{ inputs.containerRegistryUrl }}/g" ./manifests/kustomization.yaml
          sed -i "s/<YOUR_IMAGE_TAG>/${{ inputs.githubSha }}/g" ./manifests/kustomization.yaml

    Deploying the Manifests

With our manifests in place and our kustomization.yaml file (with commands to update it at runtime) ready to go, we can deploy our manifests.

    First, we'll deploy our database (deployment and service). Next, we'll use the -k parameter on kubectl to tell it to look for a kustomize configuration, transform the requested manifests and apply those. Finally, we apply the service defintions for the web and API deployments.

        run: |
          kubectl apply -f ./manifests/deployment-db.yaml \
            -f ./manifests/service-db.yaml
          kubectl apply -k ./manifests
          kubectl apply -f ./manifests/service-api.yaml \
            -f ./manifests/service-web.yaml

    Summary

    We've covered a lot of ground in today's post. We set up federated credentials with GitHub. Then we added reusable workflows to deploy an AKS environment and build/scan/publish our container images, and then to deploy them into our AKS environment.

    This sets us up to start making changes to our application and Kubernetes configuration and have those changes automatically validated and deployed by our CI/CD system. Tomorrow, we'll look at updating our application environment with runtime configuration, persistent storage, and more.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/11/index.html b/cnny-2023/tags/ask-the-expert/page/11/index.html index c0885505d3..a47da83b86 100644 --- a/cnny-2023/tags/ask-the-expert/page/11/index.html +++ b/cnny-2023/tags/ask-the-expert/page/11/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 9 min read
    Steven Murawski

    Welcome to Day 4 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we exposed the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS. Today we'll explore the topic of debugging and instrumentation.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Debugging
    • Bridge To Kubernetes
    • Instrumentation
    • Resources: For self-study!

    Debugging

    Debugging applications in a Kubernetes cluster can be challenging for several reasons:

    • Complexity: Kubernetes is a complex system with many moving parts, including pods, nodes, services, and config maps, all of which can interact in unexpected ways and cause issues.
    • Distributed Environment: Applications running in a Kubernetes cluster are often distributed across multiple nodes, which makes it harder to determine the root cause of an issue.
    • Logging and Monitoring: Debugging an application in a Kubernetes cluster requires access to logs and performance metrics, which can be difficult to obtain in a large and dynamic environment.
    • Resource Management: Kubernetes manages resources such as CPU and memory, which can impact the performance and behavior of applications. Debugging resource-related issues requires a deep understanding of the Kubernetes resource model and the underlying infrastructure.
    • Dynamic Nature: Kubernetes is designed to be dynamic, with the ability to add and remove resources as needed. This dynamic nature can make it difficult to reproduce issues and debug problems.

    However, there are many tools and practices that can help make debugging applications in a Kubernetes cluster easier, such as using centralized logging, monitoring, and tracing solutions, and following best practices for managing resources and deployment configurations.
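As a starting point, a handful of stock kubectl commands cover a surprising amount of day-to-day debugging. A quick sketch, using the web deployment from this series as the example target:

# Describe the deployment and list its Pods to see events, probe failures, and restarts
kubectl describe deployment web
kubectl get pods -l app=web

# Tail logs from the Pods behind the deployment
kubectl logs deploy/web --tail=100

# Review recent cluster events, newest last
kubectl get events --sort-by=.metadata.creationTimestamp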

    There's also another great tool in our toolbox - Bridge to Kubernetes.

    Bridge to Kubernetes

    Bridge to Kubernetes is a great tool for microservice development and debugging applications without having to locally replicate all the required microservices.

    Bridge to Kubernetes works with Visual Studio or Visual Studio Code.

    We'll walk through using it with Visual Studio Code.

    Connecting Bridge to Kubernetes to Our Cluster

    Ensure your AKS cluster is the default for kubectl

    If you've recently spun up a new AKS cluster or you have been working with a different cluster, you may need to change what cluster credentials you have configured.

    If it's a new cluster, we can use:

    RESOURCE_GROUP=<YOUR RESOURCE GROUP NAME>
    CLUSTER_NAME=<YOUR AKS CLUSTER NAME>
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME

    Open the command palette

    Open the command palette and find Bridge to Kubernetes: Configure. You may need to start typing the name to get it to show up.

    The command palette for Visual Studio Code is open and the first item is Bridge to Kubernetes: Configure

    Pick the service you want to debug

    Bridge to Kubernetes will redirect a service for you. Pick the service you want to redirect, in this case we'll pick web.

    Selecting the `web` service to redirect in Visual Studio Code

    Identify the port your application runs on

    Next, we'll be prompted to identify what port our application will run on locally. For this application it'll be 5001, but that's just specific to this application (and the default for ASP.NET 7, I believe).

    Setting port 5001 as the port to redirect to the `web` Kubernetes service in Visual Studio Code

    Pick a debug configuration to extend

Bridge to Kubernetes has a couple of ways to run - it can inject its setup and teardown into your existing debug configurations. We'll pick .NET Core Launch (web).

    Telling Bridge to Kubernetes to use the .NET Core Launch (web) debug configuration in Visual Studio Code

    Forward Traffic for All Requests

    The last prompt you'll get in the configuration is about how you want Bridge to Kubernetes to handle re-routing traffic. The default is that all requests into the service will get your local version.

    You can also redirect specific traffic. Bridge to Kubernetes will set up a subdomain and route specific traffic to your local service, while allowing other traffic to the deployed service.

    Allowing the launch of Endpoint Manager on Windows

    Using Bridge to Kubernetes to Debug Our Service

    Now that we've configured Bridge to Kubernetes, we see that tasks and a new launch configuration have been added.

    Added to .vscode/tasks.json:

{
    "label": "bridge-to-kubernetes.resource",
    "type": "bridge-to-kubernetes.resource",
    "resource": "web",
    "resourceType": "service",
    "ports": [
        5001
    ],
    "targetCluster": "aks1",
    "targetNamespace": "default",
    "useKubernetesServiceEnvironmentVariables": false
},
{
    "label": "bridge-to-kubernetes.compound",
    "dependsOn": [
        "bridge-to-kubernetes.resource",
        "build"
    ],
    "dependsOrder": "sequence"
}

    And added to .vscode/launch.json:

{
    "name": ".NET Core Launch (web) with Kubernetes",
    "type": "coreclr",
    "request": "launch",
    "preLaunchTask": "bridge-to-kubernetes.compound",
    "program": "${workspaceFolder}/src/Web/bin/Debug/net7.0/Web.dll",
    "args": [],
    "cwd": "${workspaceFolder}/src/Web",
    "stopAtEntry": false,
    "env": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "ASPNETCORE_URLS": "http://+:5001"
    },
    "sourceFileMap": {
        "/Views": "${workspaceFolder}/Views"
    }
}

    Launch the debug configuration

    We can start the process with the .NET Core Launch (web) with Kubernetes launch configuration in the Debug pane in Visual Studio Code.

    Launch the `.NET Core Launch (web) with Kubernetes` from the Debug pane in Visual Studio Code

    Enable the Endpoint Manager

    Part of this process includes a local service to help manage the traffic routing and your hosts file. This will require admin or sudo privileges. On Windows, you'll get a prompt like:

    Prompt to launch the endpoint manager.

    Access your Kubernetes cluster "locally"

Bridge to Kubernetes will set up a tunnel (thanks to port forwarding) to your local workstation and create local endpoints for the other Kubernetes-hosted services in your cluster, as well as pretend to be a pod in that cluster (for the application you are debugging).

    Output from Bridge To Kubernetes setup task.

    After making the connection to your Kubernetes cluster, the launch configuration will continue. In this case, we'll make a debug build of the application and attach the debugger. (This process may cause the terminal in VS Code to scroll with build output. You can find the Bridge to Kubernetes output with the local IP addresses and ports in the Output pane for Bridge to Kubernetes.)

You can set breakpoints, use your debug console, set watches, and run tests against your local version of the service.

    Exploring the Running Application Environment

    One of the cool things that Bridge to Kubernetes does for our debugging experience is bring the environment configuration that our deployed pod would inherit. When we launch our app, it'll see configuration and secrets that we'd expect our pod to be running with.

    To test this, we'll set a breakpoint in our application's start up to see what SQL Server is being used. We'll set a breakpoint at src/Infrastructure/Dependencies.cs on line 32.

    Then, we will start debugging the application with Bridge to Kubernetes. When it hits the breakpoint, we'll open the Debug pane and type configuration.GetConnectionString("CatalogConnection").

    When we run locally (not with Bridge to Kubernetes), we'd see:

    configuration.GetConnectionString("CatalogConnection")
    "Server=(localdb)\\mssqllocaldb;Integrated Security=true;Initial Catalog=Microsoft.eShopOnWeb.CatalogDb;"

But, with Bridge to Kubernetes, we see something more like (yours will vary based on the password):

    configuration.GetConnectionString("CatalogConnection")
    "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=*****************;TrustServerCertificate=True;"

    Debugging our local application connected to Kubernetes.

    We can see that the database server configured is based on our db service and the password is pulled from our secret in Azure KeyVault (via AKS).

    This helps us run our local application just like it was actually in our cluster.

    Going Further

    Bridge to Kubernetes also supports more advanced scenarios and, as you need to start routing traffic around inside your cluster, may require you to modify your application to pass along a kubernetes-route-as header to help ensure that traffic for your debugging workloads is properly handled. The docs go into much greater detail about that.
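As a rough illustration only (the prefix and URLs below are hypothetical, not from this sample): with isolated routing, requests that carry your isolation prefix - via the generated subdomain or the kubernetes-route-as header - reach your local copy, while everything else continues to hit the deployed service.

# Hypothetical isolation prefix assigned by Bridge to Kubernetes in isolated mode
PREFIX=yourname-1234

# Requests to the prefixed subdomain are routed to your local debug session...
curl https://$PREFIX.eshop.example.com/

# ...passing the header explicitly has the same effect - downstream services must forward it for the routing to stick
curl -H "kubernetes-route-as: $PREFIX" https://eshop.example.com/api/catalog-items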

    Instrumentation

    Now that we've figured out our debugging story, we'll need to ensure that we have the right context clues to find where we need to debug or to give us a better idea of how well our microservices are running.

    Logging and Tracing

Logging and tracing become even more critical in Kubernetes, where your application could be running in a number of pods across different nodes. When you have an issue, in addition to the normal application data, you'll want to know which pod and which node had the issue, what the state of those resources was (were you resource constrained, or were shared resources unavailable?), and, if autoscaling is enabled, whether a scale event had been triggered. There are a multitude of other concerns depending on your application and the environment you maintain.

Given these informational needs, it's crucial to revisit your existing logging and instrumentation. Most frameworks and languages have extensible logging, tracing, and instrumentation libraries that let you iteratively add information - such as pod and node state - and ensure that requests can be traced across your microservices. This will pay you back time and time again when you have to troubleshoot issues in your environment.
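One low-effort way to get pod and node identity into that telemetry is the Kubernetes Downward API, which can expose a pod's own metadata as environment variables. A minimal sketch - the deployment and container name web are assumptions here, not taken from the sample app, so adjust both to match your workload:

# Inject the pod name and node name as environment variables via the Downward API
kubectl patch deployment web --type strategic --patch '
spec:
  template:
    spec:
      containers:
      - name: web
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
'

Your logging setup can then read POD_NAME and NODE_NAME from the environment and stamp them onto every log line or trace.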

    Centralized Logging

    To enhance the troubleshooting process further, it's important to implement centralized logging to consolidate logs from all your microservices into a single location. This makes it easier to search and analyze logs when you're troubleshooting an issue.
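On AKS, one option for this is Container Insights, which collects stdout/stderr and performance data from every node into a Log Analytics workspace. Enabling it is a single command via the monitoring add-on (the resource group and cluster names below are placeholders):

az aks enable-addons \
--resource-group myResourceGroup \
--name myAKSCluster \
--addons monitoring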

    Automated Alerting

    Additionally, implementing automated alerting, such as sending notifications when specific conditions occur in the logs, can help you detect issues before they escalate.

    End to end Visibility

    End-to-end visibility is also essential in understanding the flow of requests and responses between microservices in a distributed system. With end-to-end visibility, you can quickly identify bottlenecks and slowdowns in the system, helping you to resolve issues more efficiently.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/12/index.html b/cnny-2023/tags/ask-the-expert/page/12/index.html index fe6e932c00..4f4b2eb3dc 100644 --- a/cnny-2023/tags/ask-the-expert/page/12/index.html +++ b/cnny-2023/tags/ask-the-expert/page/12/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 7 min read
    Nitya Narasimhan

    Welcome to Week 4 of #CloudNativeNewYear!

This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. We'll start with an exploration of Serverless Container Options - ranging from managed services like Azure Kubernetes Service (AKS) and Azure Container Apps (ACA) to options that allow more granular control!

    What We'll Cover

    • The Azure Compute Landscape
    • Serverless Compute on Azure
    • Comparing Container Options On Azure
    • Other Considerations
    • Exercise: Try this yourself!
    • Resources: For self-study!


    We started this series with an introduction to core concepts:

    • In Containers 101, we learned why containerization matters. Think portability, isolation, scalability, resource-efficiency and cost-effectiveness. But not all apps can be containerized.
    • In Kubernetes 101, we learned how orchestration works. Think systems to automate container deployment, scaling, and management. But using Kubernetes directly can be complex.
• In Exploring Cloud Native Options we asked the real questions: can we containerize - and should we? The first depends on app characteristics, the second on your requirements.

    For example:

    • Can we containerize? The answer might be no if your app has system or OS dependencies, requires access to low-level hardware, or maintains complex state across sessions.
• Should we containerize? The answer might be yes if your app is microservices-based, is stateless by default, requires portability, or is a legacy app that can benefit from container isolation.

    As with every technology adoption decision process, there are no clear yes/no answers - just tradeoffs that you need to evaluate based on your architecture and application requirements. In today's post, we try to look at this from two main perspectives:

    1. Should you go serverless? Think: managed services that let you focus on app, not infra.
    2. What Azure Compute should I use? Think: best fit for my architecture & technology choices.

    Azure Compute Landscape

Let's answer the second question first by exploring all available compute options on Azure. The illustrated decision flow below is one of my favorite ways to navigate the choices, with questions like:

    • Are you migrating an existing app or building a new one?
• Can your app be containerized?
    • Does it use a specific technology (Spring Boot, Red Hat Openshift)?
    • Do you need access to the Kubernetes API?
    • What characterizes the workload? (event-driven, web app, microservices etc.)

Read the docs to understand how your choices can be influenced by the hosting model (IaaS, PaaS, FaaS), supported features (Networking, DevOps, Scalability, Availability, Security), architectural styles (Microservices, Event-driven, High-Performance Compute, Task Automation, Web-Queue Worker), etc.

    Compute Choices

Now that we know all available compute options, let's come back to the first question: why go serverless? And what are my serverless compute options on Azure?

    Azure Serverless Compute

Serverless gets defined many ways, but from a compute perspective, we can focus on a few characteristics that are central to this decision:

    • managed services - focus on application, let cloud provider handle infrastructure.
    • pay for what you use - get cost-effective resource utilization, flexible pricing options.
    • autoscaling on demand - take advantage of built-in features like KEDA-compliant triggers.
    • use preferred languages - write code in Java, JS, C#, Python etc. (specifics based on service)
    • cloud-native architectures - can support event-driven solutions, APIs, Microservices, DevOps!

So what are some of the key options for Serverless Compute on Azure? The article dives into fully-managed, end-to-end serverless solutions with comprehensive support for DevOps, DevTools, AI/ML, Database, Storage, Monitoring and Analytics integrations. But we'll just focus on the 4 categories of applications when we look at Compute!

    1. Serverless Containerized Microservices using Azure Container Apps. Code in your preferred language, exploit full Dapr support, scale easily with any KEDA-compliant trigger.
    2. Serverless Application Environments using Azure App Service. Suitable for hosting monolithic apps (vs. microservices) in a managed service, with built-in support for on-demand scaling.
    3. Serverless Kubernetes using Azure Kubernetes Service (AKS). Spin up pods inside container instances and deploy Kubernetes-based applications with built-in KEDA-compliant autoscaling.
    4. Serverless Functions using Azure Functions. Execute "code at the granularity of functions" in your preferred language, and scale on demand with event-driven compute.

We'll talk about these, and other compute comparisons, at the end of the article. But let's start with the core option you might choose if you want a managed serverless compute solution with built-in features for delivering containerized microservices at scale. Hello, Azure Container Apps!

    Azure Container Apps

    Azure Container Apps (ACA) became generally available in May 2022 - providing customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform. The figure below showcases the different types of applications that can be built with ACA. Note that it comes with built-in KEDA-compliant autoscaling triggers, and other auto-scale criteria that may be better-suited to the type of application you are building.

    About ACA
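If you want a quick feel for the developer experience, a rough sketch of deploying a container with the Azure CLI looks like this (the names, location, and image below are placeholders, and the containerapp commands require the Container Apps CLI extension):

# One-time: add the Container Apps CLI extension
az extension add --name containerapp --upgrade

# An environment is the secure boundary that a group of container apps shares
az containerapp env create \
--name my-aca-env \
--resource-group myResourceGroup \
--location eastus

# Deploy a container image with external ingress on port 80
az containerapp create \
--name my-aca-app \
--resource-group myResourceGroup \
--environment my-aca-env \
--image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
--target-port 80 \
--ingress external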

So far in the series, we've put the spotlight on Azure Kubernetes Service (AKS) - so you're probably asking yourself: How does ACA compare to AKS? We're glad you asked. Check out our Go Cloud-native with Azure Container Apps post from the #ServerlessSeptember series last year for a deeper dive, or review the figure below for the main comparison points.

The key takeaway is this: Azure Container Apps (ACA) also runs on Kubernetes, but it abstracts away that complexity in a managed service offering that lets you get productive quickly without requiring detailed knowledge of Kubernetes internals or APIs. However, if you want full access and control over the Kubernetes API, then go with Azure Kubernetes Service (AKS) instead.

    Comparison

    Other Container Options

Azure Container Apps is the preferred Platform as a Service (PaaS) option for a fully-managed serverless solution on Azure that is purpose-built for cloud-native, microservices-based application workloads. But there are other options that may be suitable for your specific needs, from a requirements and tradeoffs perspective. Let's review them quickly:

    1. Azure Functions is the serverless Functions-as-a-Service (FaaS) option, as opposed to ACA which supports a PaaS approach. It's optimized for running event-driven applications built at the granularity of ephemeral functions that can be deployed as code or containers.
    2. Azure App Service provides fully managed hosting for web applications that may be deployed using code or containers. It can be integrated with other services including Azure Container Apps and Azure Functions. It's optimized for deploying traditional web apps.
    3. Azure Kubernetes Service provides a fully managed Kubernetes option capable of running any Kubernetes workload, with direct access to the Kubernetes API.
    4. Azure Container Instances provides a single pod of Hyper-V isolated containers on demand, making them more of a low-level "building block" option compared to ACA.

    Based on the technology choices you made for application development you may also have more specialized options you want to consider. For instance:

1. Azure Spring Apps is ideal if you're running Spring Boot or Spring Cloud workloads on Azure.
    2. Azure Red Hat OpenShift is ideal for integrated Kubernetes-powered OpenShift on Azure.
    3. Azure Confidential Computing is ideal if you have data/code integrity and confidentiality needs.
    4. Kubernetes At The Edge is ideal for bare-metal options that extend compute to edge devices.

This is just the tip of the iceberg in your decision-making journey - but hopefully it gave you a good sense of the options and the criteria that influence your final choices. Let's wrap this up with a look at self-study resources for skilling up further.

    Exercise

    Want to get hands on learning related to these technologies?

    TAKE THE CLOUD SKILLS CHALLENGE

    Register today and level up your skills by completing free learning modules, while competing with your peers for a place on the leaderboards!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/13/index.html b/cnny-2023/tags/ask-the-expert/page/13/index.html index d2557f0ea5..73f4087ec0 100644 --- a/cnny-2023/tags/ask-the-expert/page/13/index.html +++ b/cnny-2023/tags/ask-the-expert/page/13/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 3 min read
    Cory Skimming

    It's the final week of #CloudNativeNewYear! This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. In today's post, we will introduce you to the basics of the open-source project Draft and how it can be used to easily create and deploy applications to Kubernetes.

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    What We'll Cover

    • What is Draft?
    • Draft basics
    • Demo: Developing to AKS with Draft
    • Resources


    What is Draft?

    Draft is an open-source tool that can be used to streamline the development and deployment of applications on Kubernetes clusters. It provides a simple and easy-to-use workflow for creating and deploying applications, making it easier for developers to focus on writing code and building features, rather than worrying about the underlying infrastructure. This is great for users who are just getting started with Kubernetes, or those who are just looking to simplify their experience.

    New to Kubernetes?

    Draft basics

    Draft streamlines Kubernetes development by taking a non-containerized application and generating the Dockerfiles, K8s manifests, Helm charts, and other artifacts associated with a containerized application. Draft can also create a GitHub Action workflow file to quickly build and deploy your application onto any Kubernetes cluster.

1. draft create: Create a new Draft project by simply running the draft create command - this command will walk you through a series of questions about your application (such as the application language) and create a Dockerfile, Helm chart, and Kubernetes manifests.
2. draft generate-workflow: Automatically build out a GitHub Action using the draft generate-workflow command.
3. draft setup-gh: If you are using Azure, use this command to automate the GitHub OIDC setup process to ensure that you will be able to deploy your application using your GitHub Action.

    At this point, you will have all the files needed to deploy your app onto a Kubernetes cluster (we told you it was easy!).
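Putting those commands together, the end-to-end flow looks roughly like this (a sketch only - draft create and draft setup-gh are interactive and will prompt you for the details they need):

# From the root of your application repository:
# scaffold a Dockerfile, Helm chart, and Kubernetes manifests
draft create

# generate a GitHub Actions workflow that builds and deploys to your cluster
draft generate-workflow

# (Azure only) wire up GitHub OIDC so the workflow can authenticate without stored secrets
draft setup-gh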

    You can also use the 'draft info' command if you are looking for information on supported languages and deployment types. Let's see it in action, shall we?


    Developing to AKS with Draft

In this Microsoft Reactor session below, we'll briefly introduce Kubernetes and the Azure Kubernetes Service (AKS) and then demo how to enable your applications for Kubernetes using the open-source tool Draft. We'll show how Draft can help you create the boilerplate code to containerize your applications and add routing and scaling behaviours.

Conclusion

    Overall, Draft simplifies the process of building, deploying, and managing applications on Kubernetes, and can make the overall journey from code to Kubernetes significantly easier.


    Resources


    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/14/index.html b/cnny-2023/tags/ask-the-expert/page/14/index.html index bc43dd23e8..f93785cc1a 100644 --- a/cnny-2023/tags/ask-the-expert/page/14/index.html +++ b/cnny-2023/tags/ask-the-expert/page/14/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 7 min read
    Vinicius Apolinario

    Welcome to Day 3 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about using Draft to accelerate your Kubernetes adoption. Today we'll explore the topic of Windows containers.

    What We'll Cover

    • Introduction
    • Windows containers overview
    • Windows base container images
    • Isolation
    • Exercise: Try this yourself!
    • Resources: For self-study!

    Introduction

Windows containers were launched along with Windows Server 2016 and have evolved since. In the latest release, Windows Server 2022, Windows containers have reached a great level of maturity and allow customers to run production-grade workloads.

While suitable for new development, Windows containers also give developers and operations a different approach than Linux containers. They allow existing Windows applications to be containerized with little or no code changes, and they allow professionals who are more comfortable with the Windows platform and OS to leverage their skill set while taking advantage of the containers platform.

    Windows container overview

    In essence, Windows containers are very similar to Linux. Since Windows containers use the same foundation of Docker containers, you can expect that the same architecture applies - with the specific notes of the Windows OS. For example, when running a Windows container via Docker, you use the same commands, such as docker run. To pull a container image, you can use docker pull, just like on Linux. However, to run a Windows container, you also need a Windows container host. This requirement is there because, as you might remember, a container shares the OS kernel with its container host.

On Kubernetes, Windows containers have been supported since Windows Server 2019. Just like with Docker, you can manage Windows containers like any other resource in the Kubernetes ecosystem. A Windows node can be part of a Kubernetes cluster, allowing you to run Windows container based applications on services like Azure Kubernetes Service. To deploy a Windows application to a Windows pod in Kubernetes, you author a YAML specification much like you would for Linux. The main differences are that you point to an image that runs on Windows, and you specify a node selector to indicate that the pod needs to run on a Windows node.

    Windows base container images

On Windows containers, you will always use a base container image provided by Microsoft. This base container image contains the OS binaries for the container to run. The image can be as large as 3GB+ or as small as ~300MB. The difference in size is a consequence of the APIs and components available in each Windows base container image. There are primarily three images: Nano Server, Server Core, and Server.

Nano Server is the smallest image, at around 300MB. It's a base container image for new development and cloud-native scenarios. Applications need to target Nano Server as the Windows OS, so not all frameworks will work. For example, .NET works on Nano Server, but .NET Framework doesn't. Other third-party frameworks also work on Nano Server, such as Apache, NodeJS, Python, Tomcat, the Java runtime, JBoss, and Redis, among others.

Server Core is a much larger base container image, at around 1.25GB. Its larger size is compensated by its application compatibility. Simply put, any application that meets the requirements to run in a Windows container can be containerized with this image.

The Server image builds on the Server Core one. It comes in at around 3.1GB and has even greater application compatibility than the Server Core image. In addition to the traditional Windows APIs and components, this image allows for scenarios such as Machine Learning via DirectX with GPU access.

The best image for your scenario depends on the requirements your application has on the Windows OS inside a container. However, some scenarios are not supported at all on Windows containers - such as GUI or RDP dependent applications and some Windows Server infrastructure roles, such as Active Directory, among others.

    Isolation

    When running containers, the kernel of the container host is shared with the containers running on it. While extremely convenient, this poses a potential risk for multi-tenant scenarios. If one container is compromised and has access to the host, it could potentially compromise other containers in the same system.

For enterprise customers running on-premises (or even in the cloud), this can be mitigated by using a VM as a container host and considering the VM itself a security boundary. However, if multiple workloads from different tenants need to share the same host, Windows containers offer another option: Hyper-V isolation. While the name Hyper-V is associated with VMs, its virtualization capabilities extend to other services, including containers. Hyper-V isolated containers run on a purpose-built, extremely small, highly performant VM. However, you manage a container running with Hyper-V isolation the same way you do a process-isolated one. In fact, the only notable difference is that you need to append the --isolation=hyperv flag to the docker run command.

    Exercise

    Here are a few examples of how to use Windows containers:

    Run Windows containers via Docker on your machine

    To pull a Windows base container image:

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022

    To run a basic IIS container:

#This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    Run the same IIS container with Hyper-V isolation

#This command will pull and start an IIS container with Hyper-V isolation. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 --isolation=hyperv mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    To run a Windows container interactively:

    docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 powershell

    Run Windows containers on Kubernetes

    To prepare an AKS cluster for Windows containers: Note: Replace the values on the example below with the ones from your environment.

    echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
    az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --generate-ssh-keys \
    --windows-admin-username $WINDOWS_USERNAME \
    --vm-set-type VirtualMachineScaleSets \
    --network-plugin azure

    To add a Windows node pool for Windows containers:

    az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwin \
    --node-count 1

    Deploy a sample ASP.Net application to the AKS cluster above using the YAML file below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "kubernetes.io/os": windows
      containers:
      - name: sample
        image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
        resources:
          limits:
            cpu: 1
            memory: 800M
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: sample

    Save the file above and run the command below on your Kubernetes cluster:

kubectl apply -f <filename>

    Once deployed, you can access the application by getting the IP address of your service:

    kubectl get service

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/15/index.html b/cnny-2023/tags/ask-the-expert/page/15/index.html index e75d69b4f6..33061a8d5f 100644 --- a/cnny-2023/tags/ask-the-expert/page/15/index.html +++ b/cnny-2023/tags/ask-the-expert/page/15/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 4 min read
    Jorge Arteiro

    Welcome to Day 4 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about Windows Containers. Today we'll explore addons and extensions available to Azure Kubernetes Services (AKS).

    What We'll Cover

    • Introduction
    • Add-ons
    • Extensions
    • Add-ons vs Extensions
    • Resources

    Introduction

    Azure Kubernetes Service (AKS) is a fully managed container orchestration service that makes it easy to deploy and manage containerized applications on Azure. AKS offers a number of features and capabilities, including the ability to extend its supported functionality through the use of add-ons and extensions.

    There are also integrations available from open-source projects and third parties, but they are not covered by the AKS support policy.

    Add-ons

Add-ons provide a supported way to extend AKS. Installation, configuration, and lifecycle are managed by AKS itself, following pre-determined update rules.

As an example, let's enable Container Insights with the monitoring add-on on an existing AKS cluster using the az aks enable-addons --addons CLI command:

    az aks enable-addons \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --addons monitoring

    or you can use az aks create --enable-addons when creating new clusters

    az aks create \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --enable-addons monitoring

    The current available add-ons are:

    1. http_application_routing - Configure ingress with automatic public DNS name creation. Only recommended for development.
    2. monitoring - Container Insights monitoring.
    3. virtual-node - CNCF virtual nodes open source project.
    4. azure-policy - Azure Policy for AKS.
    5. ingress-appgw - Application Gateway Ingress Controller (AGIC).
    6. open-service-mesh - CNCF Open Service Mesh project.
    7. azure-keyvault-secrets-provider - Azure Key Vault Secrets Provider for Secret Store CSI Driver.
    8. web_application_routing - Managed NGINX ingress Controller.
    9. keda - CNCF Event-driven autoscaling project.

    For more details, get the updated list of AKS Add-ons here

    Extensions

Cluster extensions use Helm charts and integrate with Azure Resource Manager (ARM) to provide installation and lifecycle management of capabilities on top of AKS.

Extensions can be auto-upgraded across minor versions, but this requires extra management and configuration. Using the scope parameter, an extension can be installed on the whole cluster or per namespace.

AKS extensions require an Azure CLI extension to be installed. To add or update this CLI extension, use the following commands:

    az extension add --name k8s-extension

    and to update an existing extension

    az extension update --name k8s-extension

    There are only 3 available extensions:

    1. Dapr - CNCF Dapr project.
    2. Azure ML - Integrate Azure Machine Learning with AKS to train, inference and manage ML models.
    3. Flux (GitOps) - CNCF Flux project integrated with AKS to enable cluster configuration and application deployment using GitOps.

    As an example, you can install Azure ML using the following command:

    az k8s-extension create \
    --name aml-compute --extension-type Microsoft.AzureML.Kubernetes \
    --scope cluster --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters \
    --configuration-settings enableInference=True allowInsecureConnections=True
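To confirm the install (or to see what's already running on a cluster), the same CLI extension can list and inspect installed extensions - a quick sketch using the same placeholder names:

az k8s-extension list \
--resource-group <resourceGroupName> \
--cluster-name <clusterName> \
--cluster-type managedClusters

az k8s-extension show \
--name aml-compute \
--resource-group <resourceGroupName> \
--cluster-name <clusterName> \
--cluster-type managedClusters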

    For more details, get the updated list of AKS Extensions here

    Add-ons vs Extensions

AKS add-ons bring the advantage of being fully managed by AKS itself, while AKS extensions are more flexible and configurable but require an extra level of management.

Add-ons are part of the AKS resource provider in the Azure API, while AKS extensions use a separate resource provider in the Azure API.

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/16/index.html b/cnny-2023/tags/ask-the-expert/page/16/index.html index 7e32d6674a..4b8667aa9b 100644 --- a/cnny-2023/tags/ask-the-expert/page/16/index.html +++ b/cnny-2023/tags/ask-the-expert/page/16/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 6 min read
    Cory Skimming
    Steven Murawski
    Paul Yu
    Josh Duffney
    Nitya Narasimhan
    Vinicius Apolinario
    Jorge Arteiro
    Devanshi Joshi

    And that's a wrap on the inaugural #CloudNativeNewYear! Thank you for joining us to kick off the new year with this learning journey into cloud-native! In this final post of the 2023 celebration of all things cloud-native, we'll do two things:

    • Look Back - with a quick retrospective of what was covered.
    • Look Ahead - with resources and suggestions for how you can continue your skilling journey!

    We appreciate your time and attention and we hope you found this curated learning valuable. Feedback and suggestions are always welcome. From our entire team, we wish you good luck with the learning journey - now go build some apps and share your knowledge! 🎉


    What We'll Cover

    • Cloud-native fundamentals
    • Kubernetes fundamentals
    • Bringing your applications to Kubernetes
    • Go further with cloud-native
    • Resources to keep the celebration going!

    Week 1: Cloud-native Fundamentals

    In Week 1, we took a tour through the fundamentals of cloud-native technologies, including a walkthrough of the core concepts of containers, microservices, and Kubernetes.

    • Jan 23 - Cloud-native Fundamentals: The answers to life and all the universe - what is cloud-native? What makes an application cloud-native? What are the benefits? (yes, we all know it's 42, but hey, gotta start somewhere!)
    • Jan 24 - Containers 101: Containers are an essential component of cloud-native development. In this intro post, we cover how containers work and why they have become so popular.
    • Jan 25 - Kubernetes 101: Kuber-what-now? Learn the basics of Kubernetes and how it enables us to deploy and manage our applications effectively and consistently.
    A QUICKSTART GUIDE TO KUBERNETES CONCEPTS

    Missed it Live? Tune in to A Quickstart Guide to Kubernetes Concepts on demand, now!

    • Jan 26 - Microservices 101: What is a microservices architecture and how can we go about designing one?
    • Jan 27 - Exploring your Cloud Native Options: Cloud-native, while catchy, can be a very broad term. What technologies should you use? Learn some basic guidelines for when it is optimal to use different technologies for your project.

    Week 2: Kubernetes Fundamentals

    In Week 2, we took a deeper dive into the Fundamentals of Kubernetes. The posts and live demo from this week took us through how to build a simple application on Kubernetes, covering everything from deployment to networking and scaling. Note: for our samples and demo we have used Azure Kubernetes Service, but the principles apply to any Kubernetes!

    • Jan 30 - Pods and Deployments: how to use pods and deployments in Kubernetes.
    • Jan 31 - Services and Ingress: how to use services and ingress and a walk through the steps of making our containers accessible internally and externally!
• Feb 1 - ConfigMaps and Secrets: how to pass configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.
    • Feb 2 - Volumes, Mounts, and Claims: how to use persistent storage on Kubernetes (and ensure your data can survive container restarts!).
    • Feb 3 - Scaling Pods and Nodes: how to scale pods and nodes in our Kubernetes cluster.
    ASK THE EXPERTS: AZURE KUBERNETES SERVICE

    Missed it Live? Tune in to Ask the Expert with Azure Kubernetes Service on demand, now!


    Week 3: Bringing your applications to Kubernetes

    So, you have learned how to build an application on Kubernetes. What about your existing applications? In Week 3, we explored how to take an existing application and set it up to run in Kubernetes:

    • Feb 6 - CI/CD: learn how to get an existing application running in Kubernetes with a full pipeline in GitHub Actions.
    • Feb 7 - Adapting Storage, Secrets, and Configuration: how to evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes.
    • Feb 8 - Opening your Application with Ingress: how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.
    • Feb 9 - Debugging and Instrumentation: how to debug and instrument your application now that it is on Kubernetes.
• Feb 10 - CI/CD Secure Supply Chain: now that we have set up our application on Kubernetes, let's talk about container image signing and how to set up a secure supply chain.

    Week 4: Go Further with Cloud-Native

    This week we have gone further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner.

    And today, February 17th, with this one post to rule (er, collect) them all!


    Keep the Learning Going!

    Learning is great, so why stop here? We have a host of great resources and samples for you to continue your cloud-native journey with Azure below:


    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/2/index.html b/cnny-2023/tags/ask-the-expert/page/2/index.html index 55455ee6af..705e052779 100644 --- a/cnny-2023/tags/ask-the-expert/page/2/index.html +++ b/cnny-2023/tags/ask-the-expert/page/2/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 5 min read
    Cory Skimming

    Welcome to Week 1 of #CloudNativeNewYear!

    Cloud-native New Year

    You will often hear the term "cloud-native" when discussing modern application development, but even a quick online search will return a huge number of articles, tweets, and web pages with a variety of definitions. So, what does cloud-native actually mean? Also, what makes an application a cloud-native application versus a "regular" application?

    Today, we will address these questions and more as we kickstart our learning journey (and our new year!) with an introductory dive into the wonderful world of cloud-native.


    What We'll Cover

    • What is cloud-native?
    • What is a cloud-native application?
    • The benefits of cloud-native
    • The five pillars of cloud-native
    • Exercise: Take the Cloud Skills Challenge!

    1. What is cloud-native?

The term "cloud-native" can seem pretty self-evident (yes, hello, native to the cloud?), and in a way, it is. While there are lots of definitions of cloud-native floating around, at its core, cloud-native simply refers to a modern approach to building software that takes advantage of cloud services and environments. This includes using cloud-native technologies, such as containers, microservices, and serverless, and following best practices for deploying, scaling, and managing applications in a cloud environment.

    Official definition from the Cloud Native Computing Foundation:

    Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

    These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil. Source


    2. So, what exactly is a cloud-native application?

    Cloud-native applications are specifically designed to take advantage of the scalability, resiliency, and distributed nature of modern cloud infrastructure. But how does this differ from a "traditional" application?

Traditional applications have generally been built, tested, and deployed as a single, monolithic unit. The monolithic nature of this type of architecture creates close dependencies between components. This complexity and interweaving only increases as an application grows, and it can make the application difficult to evolve (not to mention troubleshoot) and challenging to operate over time.

To contrast, in cloud-native architectures the application components are decomposed into loosely coupled services, rather than built and deployed as one block of code. This decomposition into multiple self-contained services enables teams to manage complexity and improve the speed, agility, and scale of software delivery. Having many small parts enables teams to make targeted updates, deliver new features, and fix any issues without causing broader service disruption.


    3. The benefits of cloud-native

    Cloud-native architectures can bring many benefits to an organization, including:

    1. Scalability: easily scale up or down based on demand, allowing organizations to adjust their resource usage and costs as needed.
    2. Flexibility: deploy and run on any cloud platform, and easily move between clouds and on-premises environments.
    3. High-availability: techniques such as redundancy, self-healing, and automatic failover help ensure that cloud-native applications are designed to be highly-available and fault tolerant.
    4. Reduced costs: take advantage of the pay-as-you-go model of cloud computing, reducing the need for expensive infrastructure investments.
    5. Improved security: tap in to cloud security features, such as encryption and identity management, to improve the security of the application.
    6. Increased agility: easily add new features or services to your applications to meet changing business needs and market demand.

    4. The pillars of cloud-native

    There are five areas that are generally cited as the core building blocks of cloud-native architecture:

    1. Microservices: Breaking down monolithic applications into smaller, independent, and loosely-coupled services that can be developed, deployed, and scaled independently.
    2. Containers: Packaging software in lightweight, portable, and self-sufficient containers that can run consistently across different environments.
    3. Automation: Using automation tools and DevOps processes to manage and operate the cloud-native infrastructure and applications, including deployment, scaling, monitoring, and self-healing.
    4. Service discovery: Using service discovery mechanisms, such as APIs & service meshes, to enable services to discover and communicate with each other.
    5. Observability: Collecting and analyzing data from the infrastructure and applications to understand and optimize the performance, behavior, and health of the system.

    These can (and should!) be used in combination to deliver cloud-native solutions that are highly scalable, flexible, and available.

    WHAT'S NEXT

    Stay tuned, as we will be diving deeper into these topics in the coming weeks:

    • Jan 24: Containers 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud-native Options

    Resources


    Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!


    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/3/index.html b/cnny-2023/tags/ask-the-expert/page/3/index.html index 681004e8a1..696d580252 100644 --- a/cnny-2023/tags/ask-the-expert/page/3/index.html +++ b/cnny-2023/tags/ask-the-expert/page/3/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 4 min read
    Steven Murawski
    Paul Yu
    Josh Duffney

    Welcome to Day 2 of Week 1 of #CloudNativeNewYear!

    Today, we'll focus on building an understanding of containers.

    What We'll Cover

    • Introduction
    • How do Containers Work?
    • Why are Containers Becoming so Popular?
    • Conclusion
    • Resources
    • Learning Path

    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    In the beginning, we deployed our applications onto physical servers. We only had a certain number of those servers, so often they hosted multiple applications. This led to some problems when those applications shared dependencies. Upgrading one application could break another application on the same server.

    Enter virtualization. Virtualization allowed us to run our applications in an isolated operating system instance. This removed much of the risk of updating shared dependencies. However, it increased our overhead since we had to run a full operating system for each application environment.

    To address the challenges created by virtualization, containerization was created to improve isolation without duplicating kernel level resources. Containers provide efficient and consistent deployment and runtime experiences for our applications and have become very popular as a way of packaging and distributing applications.

    How do Containers Work?

Containers build on two capabilities in the Linux operating system, namespaces and cgroups. These constructs allow the operating system to provide isolation to a process or group of processes, keeping their access to filesystem resources separate and providing controls on resource utilization. This, combined with tooling to help package, deploy, and run container images, has led to their popularity in today's operating environment. It gives us isolation without the overhead of additional operating system resources.

When a container host is deployed on an operating system, it works at scheduling access to the OS (operating system) components. This is done by providing a logically isolated group that can contain processes for a given application, called a namespace. The container host then manages/schedules access from the namespace to the host OS. The container host then uses cgroups to allocate compute resources. Together, the container host, with the help of cgroups and namespaces, can schedule multiple applications to access host OS resources.

    Overall, this gives the illusion of virtualizing the host OS, where each application gets its own OS. In actuality, all the applications are running on the same operating system and sharing the same kernel as the container host.
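You can see the cgroup side of this directly from Docker: the resource limits you pass at run time become cgroup constraints on the container's processes. A small sketch (the nginx image is just a convenient example):

# Start a container capped at half a CPU core and 256 MiB of memory
docker run -d --name limited-web --cpus 0.5 --memory 256m nginx

# Watch its live CPU/memory usage against those limits, then clean up
docker stats limited-web --no-stream
docker rm -f limited-web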

    Containers are popular in the software development industry because they provide several benefits over traditional virtualization methods. Some of these benefits include:

    • Portability: Containers make it easy to move an application from one environment to another without having to worry about compatibility issues or missing dependencies.
    • Isolation: Containers provide a level of isolation between the application and the host system, which means that the application running in the container cannot access the host system's resources.
    • Scalability: Containers make it easy to scale an application up or down as needed, which is useful for applications that experience a lot of traffic or need to handle a lot of data.
    • Resource Efficiency: Containers are more resource-efficient than traditional virtualization methods because they don't require a full operating system to be running on each virtual machine.
    • Cost-Effective: Containers are more cost-effective than traditional virtualization methods because they don't require expensive hardware or licensing fees.

    Conclusion

    Containers are a powerful technology that allows developers to package and deploy applications in a portable and isolated environment. This technology is becoming increasingly popular in the world of software development and is being used by many companies and organizations to improve their application deployment and management processes. With the benefits of portability, isolation, scalability, resource efficiency, and cost-effectiveness, containers are definitely worth considering for your next application development project.

    Containerizing applications is a key step in modernizing them, and there are many other patterns that can be adopted to achieve cloud-native architectures, including using serverless platforms, Kubernetes, and implementing DevOps practices.

    Resources

    Learning Path

    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/4/index.html b/cnny-2023/tags/ask-the-expert/page/4/index.html index a5f1e898a0..fd2620ba60 100644 --- a/cnny-2023/tags/ask-the-expert/page/4/index.html +++ b/cnny-2023/tags/ask-the-expert/page/4/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 3 min read
    Steven Murawski

    Welcome to Day 3 of Week 1 of #CloudNativeNewYear!

Today we'll focus on what Kubernetes is.

    What We'll Cover

    • Introduction
    • What is Kubernetes? (Video)
    • How does Kubernetes Work? (Video)
    • Conclusion


    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    Kubernetes is an open source container orchestration engine that can help with automated deployment, scaling, and management of our applications.

    Kubernetes takes physical (or virtual) resources and provides a consistent API over them, bringing a consistency to the management and runtime experience for our applications. Kubernetes provides us with a number of capabilities such as:

    • Container scheduling
    • Service discovery and load balancing
    • Storage orchestration
    • Automated rollouts and rollbacks
    • Automatic bin packing
    • Self-healing
    • Secret and configuration management

    We'll learn more about most of these topics as we progress through Cloud Native New Year.
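To make a couple of these capabilities concrete before we go deeper, here's a minimal sketch of that consistent API in action - the same commands work against any conformant cluster (the nginx image and names are just examples):

# Declare a deployment and let Kubernetes schedule its containers
kubectl create deployment hello --image=nginx

# Service discovery and load balancing across the deployment's pods
kubectl expose deployment hello --port=80 --type=LoadBalancer

# Scale out (or back in) with a single command
kubectl scale deployment hello --replicas=3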

    What is Kubernetes?

    Let's hear from Brendan Burns, one of the founders of Kubernetes as to what Kubernetes actually is.

    How does Kubernetes Work?

    And Brendan shares a bit more with us about how Kubernetes works.

    Conclusion

    Kubernetes allows us to deploy and manage our applications effectively and consistently.

    By providing a consistent API across many of the concerns our applications have, like load balancing, networking, storage, and compute, Kubernetes improves both our ability to build and ship new software.

There are standards that applications can depend on for the resources they need. Deployments, metrics, and logs are provided in a standardized fashion, allowing more efficient operations across our application environments.

And since Kubernetes is an open source platform, it can be found in just about every type of operating environment - cloud, virtual machines, physical hardware, shared data centers, even small devices like Raspberry Pis!

    Want to learn more? Join us for a webinar on Kubernetes Concepts (or catch the playback) on Thursday, January 26th at 1 PM PST and watch for the rest of this series right here!

    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/5/index.html b/cnny-2023/tags/ask-the-expert/page/5/index.html index a70a18ddf0..ff0a814584 100644 --- a/cnny-2023/tags/ask-the-expert/page/5/index.html +++ b/cnny-2023/tags/ask-the-expert/page/5/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 4 of Week 1 of #CloudNativeNewYear!

This week we're focused on cloud-native fundamentals. In this post we'll dig into microservices - what they are, how to design them, and the challenges they introduce.

    What We'll Cover

    • What is Microservice Architecture?
    • How do you design a Microservice?
    • What challenges do Microservices introduce?
    • Conclusion
    • Resources


    Microservices are a modern way of designing and building software that increases deployment velocity by decomposing an application into small autonomous services that can be deployed independently.

    By deploying loosely coupled microservices your applications can be developed, deployed, and scaled independently. Because each service is independent, it can be updated or replaced without having to worry about the impact on the rest of the application. This means that if a bug is found in one service, it can be fixed without having to redeploy the entire application. All of which gives an organization the ability to deliver value to their customers faster.

    In this article, we will explore the basics of microservices architecture, its benefits and challenges, and how it can help improve the development, deployment, and maintenance of software applications.

    What is Microservice Architecture?

    Before explaining what Microservice architecture is, it’s important to understand what problems microservices aim to address.

Traditional software development is centered around building monolithic applications. Monolithic applications are built as a single, large codebase. This means your code is tightly coupled, causing the monolithic app to suffer from the following:

    Too much Complexity: Monolithic applications can become complex and difficult to understand and maintain as they grow. This can make it hard to identify and fix bugs and add new features.

    Difficult to Scale: Monolithic applications can be difficult to scale as they often have a single point of failure, which can cause the whole application to crash if a service fails.

    Slow Deployment: Deploying a monolithic application can be risky and time-consuming, as a small change in one part of the codebase can affect the entire application.

    Microservice architecture (often called microservices) is an architecture style that addresses the challenges created by Monolithic applications. Microservices architecture is a way of designing and building software applications as a collection of small, independent services that communicate with each other through APIs. This allows for faster development and deployment cycles, as well as easier scaling and maintenance than is possible with a monolithic application.

    How do you design a Microservice?

    Building applications with Microservices architecture requires a different approach. Microservices architecture focuses on business capabilities rather than technical layers, such as data access or messaging. Doing so requires that you shift your focus away from the technical stack and model your applications based upon the various domains that exist within the business.

    Domain-driven design (DDD) is a way to design software by focusing on the business needs. You can use Domain-driven design as a framework that guides the development of well-designed microservices by building services that encapsulate knowledge in each domain and abstract that knowledge from clients.

In Domain-driven design you start by modeling the business domain and creating a domain model. A domain model is an abstract model of the business domain that distills and organizes a domain of knowledge and provides a common language for developers and domain experts. It's the resulting domain model that microservices are best suited to be built around, because it helps establish well-defined boundaries between external systems and other internal applications.

    In short, before you begin designing microservices, start by mapping the functions of the business and their connections to create a domain model for the microservice(s) to be built around.

    What challenges do Microservices introduce?

    Microservices solve a lot of problems and have several advantages, but the grass isn’t always greener on the other side.

    One of the key challenges of microservices is managing communication between services. Because services are independent, they need to communicate with each other through APIs. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear API design, with well-defined inputs and outputs for each service. It is also important to have a system for managing and monitoring communication between services, to ensure that everything is running smoothly.

    Another challenge of microservices is managing the deployment and scaling of services. Because each service is independent, it needs to be deployed and scaled separately from the rest of the application. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear and consistent deployment process, with well-defined steps for deploying and scaling each service. Furthermore, it is advisable to host them on a system with self-healing capabilities to reduce operational burden.

    It is also important to have a system for monitoring and managing the deployment and scaling of services, to ensure optimal performance.

Each of these challenges has created fertile ground for the tooling and processes that exist in the cloud-native ecosystem. Kubernetes, CI/CD pipelines, and other DevOps practices are part of the package of adopting a microservices architecture.

    Conclusion

    In summary, microservices architecture focuses on software applications as a collection of small, independent services that communicate with each other over well-defined APIs.

    The main advantages of microservices include:

    • increased flexibility and scalability per microservice,
    • efficient resource utilization (with help from a container orchestrator like Kubernetes),
    • and faster development cycles.

    Continue following along with this series to see how you can use Kubernetes to help adopt microservices patterns in your own environments!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/6/index.html b/cnny-2023/tags/ask-the-expert/page/6/index.html index 388f65b198..a90d9873c7 100644 --- a/cnny-2023/tags/ask-the-expert/page/6/index.html +++ b/cnny-2023/tags/ask-the-expert/page/6/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 6 min read
    Cory Skimming

    We are excited to be wrapping up our first week of #CloudNativeNewYear! This week, we have tried to set the stage by covering the fundamentals of cloud-native practices and technologies, including primers on containerization, microservices, and Kubernetes.

Don't forget to sign up for the Cloud Skills Challenge!

    Today, we will do a brief recap of some of these technologies and provide some basic guidelines for when it is optimal to use each.


    What We'll Cover

    • To Containerize or not to Containerize?
    • The power of Kubernetes
    • Where does Serverless fit?
    • Resources
    • What's coming next!


    Just joining us now? Check out these other Week 1 posts:

    To Containerize or not to Containerize?

    As mentioned in our Containers 101 post earlier this week, containers can provide several benefits over traditional virtualization methods, which has made them popular within the software development community. Containers provide a consistent and predictable runtime environment, which can help reduce the risk of compatibility issues and simplify the deployment process. Additionally, containers can improve resource efficiency by allowing multiple applications to run on the same host while isolating their dependencies.

    Some types of apps that are a particularly good fit for containerization include:

    1. Microservices: Containers are particularly well-suited for microservices-based applications, as they can be used to isolate and deploy individual components of the system. This allows for more flexibility and scalability in the deployment process.
    2. Stateless applications: Applications that do not maintain state across multiple sessions, such as web applications, are well-suited for containers. Containers can be easily scaled up or down as needed and replaced with new instances, without losing data.
    3. Portable applications: Applications that need to be deployed in different environments, such as on-premises, in the cloud, or on edge devices, can benefit from containerization. The consistent and portable runtime environment of containers can make it easier to move the application between different environments.
    4. Legacy applications: Applications that are built using older technologies or that have compatibility issues can be containerized to run in an isolated environment, without impacting other applications or the host system.
    5. Dev and testing environments: Containerization can be used to create isolated development and testing environments, which can be easily created and destroyed as needed.

    While there are many types of applications that can benefit from a containerized approach, it's worth noting that containerization is not always the best option, and it's important to weigh the benefits and trade-offs before deciding to containerize an application. Additionally, some types of applications may not be a good fit for containers including:

    • Apps that require full access to host resources: Containers are isolated from the host system, so if an application needs direct access to hardware resources such as GPUs or specialized devices, it might not work well in a containerized environment.
    • Apps that require low-level system access: If an application requires deep access to the underlying operating system, it may not be suitable for running in a container.
    • Applications that have specific OS dependencies: Apps that have specific dependencies on a certain version of an operating system or libraries may not be able to run in a container.
    • Stateful applications: Apps that maintain state across multiple sessions, such as databases, may not be well suited for containers. Containers are ephemeral by design, so the data stored inside a container may not persist between restarts.

The good news is that some of these limitations can be overcome with the help of container orchestration platforms such as Kubernetes, and by carefully designing the architecture of the application.


    The power of Kubernetes

    Speaking of Kubernetes...

    Kubernetes is a powerful tool for managing and deploying containerized applications in production environments, particularly for applications that need to scale, handle large numbers of requests, or run in multi-cloud or hybrid environments.

    Kubernetes is well-suited for a wide variety of applications, but it is particularly well-suited for the following types of applications:

    1. Microservices-based applications: Kubernetes provides a powerful set of tools for managing and deploying microservices-based applications, making it easy to scale, update, and manage the individual components of the application.
    2. Stateful applications: Kubernetes provides support for stateful applications through the use of Persistent Volumes and StatefulSets, allowing for applications that need to maintain state across multiple instances.
    3. Large-scale, highly-available systems: Kubernetes provides built-in support for scaling, self-healing, and rolling updates, making it an ideal choice for large-scale, highly-available systems that need to handle large numbers of users and requests.
    4. Multi-cloud and hybrid environments: Kubernetes can be used to deploy and manage applications across multiple cloud providers and on-premises environments, making it a good choice for organizations that want to take advantage of the benefits of multiple cloud providers or that need to deploy applications in a hybrid environment.
    New to Kubernetes?

    Where does Serverless fit in?

    Serverless is a cloud computing model where the cloud provider (like Azure) is responsible for executing a piece of code by dynamically allocating the resources. With serverless, you only pay for the exact amount of compute time that you use, rather than paying for a fixed amount of resources. This can lead to significant cost savings, particularly for applications with variable or unpredictable workloads.

    Serverless is commonly used for building applications like web or mobile apps, IoT, data processing, and real-time streaming - apps where the workloads are variable and high scalability is required. It's important to note that serverless is not a replacement for all types of workloads - it's best suited for stateless, short-lived and small-scale workloads.

    For a detailed look into the world of Serverless and lots of great learning content, revisit #30DaysofServerless.


    Resources


    What's up next in #CloudNativeNewYear?

    Week 1 has been all about the fundamentals of cloud-native. Next week, the team will be diving in to application deployment with Azure Kubernetes Service. Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!


    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/7/index.html b/cnny-2023/tags/ask-the-expert/page/7/index.html index fad59de6ef..479f5ce2ad 100644 --- a/cnny-2023/tags/ask-the-expert/page/7/index.html +++ b/cnny-2023/tags/ask-the-expert/page/7/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 14 min read
    Steven Murawski

    Welcome to Day #1 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Last week we talked about Cloud Native architectures and the Cloud Native landscape. Today we'll explore the topic of Pods and Deployments in Kubernetes.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Setting Up A Kubernetes Environment in Azure
    • Running Containers in Kubernetes Pods
    • Making the Pods Resilient with Deployments
    • Exercise
    • Resources

    Setting Up A Kubernetes Environment in Azure

    For this week, we'll be working with a simple app - the Azure Voting App. My teammate Paul Yu ported the app to Rust and we tweaked it a bit to let us highlight some of the basic features of Kubernetes.

    You should be able to replicate this in just about any Kubernetes environment, but we'll use Azure Kubernetes Service (AKS) as our working environment for this week.

    To make it easier to get started, there's a Bicep template to deploy an AKS cluster, an Azure Container Registry (ACR) (to host our container image), and connect the two so that we can easily deploy our application.

    Step 0 - Prerequisites

    There are a few things you'll need if you want to work through this and the following examples this week.

    Required:

    • Git (and probably a GitHub account if you want to persist your work outside of your computer)
    • Azure CLI
    • An Azure subscription (if you want to follow along with the Azure steps)
    • Kubectl (the command line tool for managing Kubernetes)

    Helpful:

    • Visual Studio Code (or equivalent editor)

    Step 1 - Clone the application repository

    First, I forked the source repository to my account.

    $GitHubOrg = 'smurawski' # Replace this with your GitHub account name or org name
    git clone "https://github.com/$GitHubOrg/azure-voting-app-rust"
    cd azure-voting-app-rust

    Leave your shell opened with your current location inside the application repository.

    Step 2 - Set up AKS

    Running the template deployment from the demo script (I'm using the PowerShell example in cnny23-week2-day1.ps1, but there's a Bash variant at cnny23-week2-day1.sh) stands up the environment. The second, third, and fourth commands take some of the output from the Bicep deployment to set up for later commands, so don't close out your shell after you run these commands.

    az deployment sub create --template-file ./deploy/main.bicep --location eastus --parameters 'resourceGroup=cnny-week2'
    $AcrName = az deployment sub show --name main --query 'properties.outputs.acr_name.value' -o tsv
    $AksName = az deployment sub show --name main --query 'properties.outputs.aks_name.value' -o tsv
    $ResourceGroup = az deployment sub show --name main --query 'properties.outputs.resource_group_name.value' -o tsv

    az aks get-credentials --resource-group $ResourceGroup --name $AksName

    Step 3 - Build our application container

    Since we have an Azure Container Registry set up, I'll use ACR Build Tasks to build and store my container image.

    az acr build --registry $AcrName --% --image cnny2023/azure-voting-app-rust:{{.Run.ID}} .
    $BuildTag = az acr repository show-tags `
    --name $AcrName `
    --repository cnny2023/azure-voting-app-rust `
    --orderby time_desc `
    --query '[0]' -o tsv
    tip

Wondering what the --% is in the first command line? That tells the PowerShell interpreter to pass the input after it "as is" to the command without parsing/evaluating it. Otherwise, PowerShell messes with the templated {{.Run.ID}} portion.

    Running Containers in Kubernetes Pods

    Now that we have our AKS cluster and application image ready to go, let's look into how Kubernetes runs containers.

If you've been in tech for any length of time, you've seen that every framework, runtime, orchestrator, etc. can have its own naming scheme for its concepts. So let's get into some of what Kubernetes calls things.

    The Pod

A container running in Kubernetes is called a Pod. A Pod is basically a running container on a Node or VM. It can be more - for example, you can run multiple containers in a single Pod and specify some funky configuration - but we'll keep it simple for now and add the complexity when you need it.

    Our Pod definition can be created via the kubectl command imperatively from arguments or declaratively from a configuration file. We'll do a little of both. We'll use the kubectl command to help us write our configuration files. Kubernetes configuration files are YAML, so having an editor that supports and can help you syntax check YAML is really helpful.

    Creating a Pod Definition

    Let's create a few Pod definitions. Our application requires two containers to get working - the application and a database.

    Let's create the database Pod first. And before you comment, the configuration isn't secure nor best practice. We'll fix that later this week. For now, let's focus on getting up and running.

    This is a trick I learned from one of my teammates - Paul. By using the --output yaml and --dry-run=client options, we can have the command help us write our YAML. And with a bit of output redirection, we can stash it safely in a file for later use.

    kubectl run azure-voting-db `
    --image "postgres:15.0-alpine" `
    --env "POSTGRES_PASSWORD=mypassword" `
    --output yaml `
    --dry-run=client > manifests/pod-db.yaml

    This creates a file that looks like:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: azure-voting-db
  name: azure-voting-db
spec:
  containers:
  - env:
    - name: POSTGRES_PASSWORD
      value: mypassword
    image: postgres:15.0-alpine
    name: azure-voting-db
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

    The file, when supplied to the Kubernetes API, will identify what kind of resource to create, the API version to use, and the details of the container (as well as an environment variable to be supplied).

    We'll get that container image started with the kubectl command. Because the details of what to create are in the file, we don't need to specify much else to the kubectl command but the path to the file.

    kubectl apply -f ./manifests/pod-db.yaml

I'm going to need the IP address of the Pod so that my application can connect to it, so let's use kubectl to get some information about our pod. By default, kubectl get pod only displays certain information, but it retrieves a lot more. We can use the JSONPath syntax to index into the response and get the information we want.

    tip

To see what you can get, I usually run the kubectl command with JSON output (-o json), find where the data I want lives, and then create my JSONPath query to get it.
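For example, before writing the query for the Pod IP, you could dump the whole object and scan it for the field you need (this is just an inspection step; the field of interest here turns out to be .status.podIP):

# Dump the full Pod object as JSON and look for the field you want to extract
kubectl get pod azure-voting-db -o json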

    $DB_IP = kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    Now, let's create our Pod definition for our application. We'll use the same technique as before.

    kubectl run azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --env "DATABASE_SERVER=$DB_IP" `
    --env "DATABASE_PASSWORD=mypassword`
    --output yaml `
    --dry-run=client > manifests/pod-app.yaml

    That command gets us a similar YAML file to the database container - you can see the full file here

    Let's get our application container running.

    kubectl apply -f ./manifests/pod-app.yaml

    Now that the Application is Running

    We can check the status of our Pods with:

    kubectl get pods

    And we should see something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app 1/1 Running 0 36s
    azure-voting-db 1/1 Running 0 84s

    Once our pod is running, we can check to make sure everything is working by letting kubectl proxy network connections to our Pod running the application. If we get the voting web page, we'll know the application found the database and we can start voting!

    kubectl port-forward pod/azure-voting-app 8080:8080

(Screenshot: the Azure voting website in a browser with three buttons, one for Dogs, one for Cats, and one for Reset. The counter reads Dogs - 0 and Cats - 0.)

    When you are done voting, you can stop the port forwarding by using Control-C to break the command.

    Clean Up

    Let's clean up after ourselves and see if we can't get Kubernetes to help us keep our application running. We can use the same configuration files to ensure that Kubernetes only removes what we want removed.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    Summary - Pods

    A Pod is the most basic unit of work inside Kubernetes. Once the Pod is deleted, it's gone. That leads us to our next topic (and final topic for today.)

    Making the Pods Resilient with Deployments

    We've seen how easy it is to deploy a Pod and get our containers running on Nodes in our Kubernetes cluster. But there's a problem with that. Let's illustrate it.

    Breaking Stuff

    Setting Back Up

    First, let's redeploy our application environment. We'll start with our application container.

    kubectl apply -f ./manifests/pod-db.yaml
    kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    The second command will report out the new IP Address for our database container. Let's open ./manifests/pod-app.yaml and update the container IP to our new one.

- name: DATABASE_SERVER
  value: YOUR_NEW_IP_HERE

    Then we can deploy the application with the information it needs to find its database. We'll also list out our pods to see what is running.

    kubectl apply -f ./manifests/pod-app.yaml
    kubectl get pods

    Feel free to look back and use the port forwarding trick to make sure your app is running if you'd like.

    Knocking It Down

    The first thing we'll try to break is our application pod. Let's delete it.

    kubectl delete pod azure-voting-app

    Then, we'll check our pod's status:

    kubectl get pods

    Which should show something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db 1/1 Running 0 50s

    We should be able to recreate our application pod deployment with no problem, since it has the current database IP address and nothing else depends on it.

    kubectl apply -f ./manifests/pod-app.yaml

    Again, feel free to do some fun port forwarding and check your site is running.

    Uncomfortable Truths

Here's where it gets a bit stickier: what if we delete the database container?

    If we delete our database container and recreate it, it'll likely have a new IP address, which would force us to update our application configuration. We'll look at some solutions for these problems in the next three posts this week.

    Because our database problem is a bit tricky, we'll primarily focus on making our application layer more resilient and prepare our database layer for those other techniques over the next few days.

    Let's clean back up and look into making things more resilient.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    The Deployment

One of the reasons you may want to use Kubernetes is its ability to orchestrate workloads. Part of that orchestration includes being able to ensure that certain workloads are running (regardless of what Node they might be on).

    We saw that we could delete our application pod and then restart it from the manifest with little problem. It just meant that we had to run a command to restart it. We can use the Deployment in Kubernetes to tell the orchestrator to ensure we have our application pod running.

    The Deployment also can encompass a lot of extra configuration - controlling how many containers of a particular type should be running, how upgrades of container images should proceed, and more.
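To make that concrete, here is a minimal sketch of what a Deployment manifest with some of that extra configuration could look like. The replica count, update strategy, registry, and image tag below are illustrative placeholders, not the exact values used later in this post.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-voting-app
spec:
  replicas: 2                      # keep two copies of the application Pod running
  selector:
    matchLabels:
      app: azure-voting-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1            # replace at most one Pod at a time during an image upgrade
  template:
    metadata:
      labels:
        app: azure-voting-app
    spec:
      containers:
      - name: azure-voting-app
        image: <your-acr>.azurecr.io/cnny2023/azure-voting-app-rust:<tag>   # placeholder image reference
        ports:
        - containerPort: 8080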

    Creating the Deployment

    First, we'll create a Deployment for our database. We'll use a technique similar to what we did for the Pod, with just a bit of difference.

    kubectl create deployment azure-voting-db `
    --image "postgres:15.0-alpine" `
    --port 5432 `
    --output yaml `
    --dry-run=client > manifests/deployment-db.yaml

    Unlike our Pod definition creation, we can't pass in environment variable configuration from the command line. We'll have to edit the YAML file to add that.

    So, let's open ./manifests/deployment-db.yaml in our editor and add the following in the spec/containers configuration.

        env:
        - name: POSTGRES_PASSWORD
          value: "mypassword"

    Your file should look like this deployment-db.yaml.

    Once we have our configuration file updated, we can deploy our database container image.

    kubectl apply -f ./manifests/deployment-db.yaml

    For our application, we'll use the same technique.

    kubectl create deployment azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --port 8080 `
    --output yaml `
    --dry-run=client > manifests/deployment-app.yaml

    Next, we'll need to add an environment variable to the generated configuration. We'll also need the new IP address for the database deployment.

Previously, we named the pod and were able to ask for the IP address with kubectl and a bit of JSONPath. Now, the deployment created the pod for us, so there's a bit of randomness in the naming. Check out:

    kubectl get pods

    Should return something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 7s

    We can either ask for the IP with the new pod name, or we can use a selector to find our desired pod.

    kubectl get pod --selector app=azure-voting-db -o jsonpath='{.items[0].status.podIP}'

    Now, we can update our application deployment configuration file with:

        env:
        - name: DATABASE_SERVER
          value: YOUR_NEW_IP_HERE
        - name: DATABASE_PASSWORD
          value: mypassword

    Your file should look like this deployment-app.yaml (but with IPs and image names matching your environment).

    After we save those changes, we can deploy our application.

    kubectl apply -f ./manifests/deployment-app.yaml

    Let's test the resilience of our app now. First, we'll delete the pod running our application, then we'll check to make sure Kubernetes restarted our application pod.

    kubectl get pods
    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app-56c9ccc89d-skv7x 1/1 Running 0 71s
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 12m
    kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
    kubectl get pods
    azure-voting-app-rust ❯  kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
    >> kubectl get pods
    pod "azure-voting-app-56c9ccc89d-skv7x" deleted
    NAME READY STATUS RESTARTS AGE
    azure-voting-app-56c9ccc89d-2b5mx 1/1 Running 0 2s
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 14m
    info

    Your Pods will likely have different identifiers at the end, so adjust your commands to match the names in your environment.

    As you can see, by the time the kubectl get pods command was run, Kubernetes had already spun up a new pod for the application container image. Thanks Kubernetes!

    Clean up

Since deleting the pods would just cause the Deployments to recreate them, we have to delete the Deployments themselves.

    kubectl delete -f ./manifests/deployment-app.yaml
    kubectl delete -f ./manifests/deployment-db.yaml

    Summary - Deployments

    Deployments allow us to create more durable configuration for the workloads we deploy into Kubernetes. As we dig deeper, we'll discover more capabilities the deployments offer. Check out the Resources below for more.

    Exercise

    If you want to try these steps, head over to the source repository, fork it, clone it locally, and give it a spin!

    You can check your manifests against the manifests in the week2/day1 branch of the source repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training

    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/8/index.html b/cnny-2023/tags/ask-the-expert/page/8/index.html index eaadc75b15..e790f2d7c3 100644 --- a/cnny-2023/tags/ask-the-expert/page/8/index.html +++ b/cnny-2023/tags/ask-the-expert/page/8/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 3 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about Services and Ingress. Today we'll explore the topic of passing configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

• Decouple configurations with ConfigMaps and Secrets
    • Passing Environment Data with ConfigMaps and Secrets
    • Conclusion

Decouple configurations with ConfigMaps and Secrets

A ConfigMap is a Kubernetes object that decouples configuration data from pod definitions. Kubernetes Secrets are similar, but are designed to decouple sensitive information.

Separating the configuration and secrets from your application promotes better organization and security of your Kubernetes environment. It also enables you to share the same configuration and different secrets across multiple pods and deployments, which can simplify scaling and management. Using ConfigMaps and Secrets in Kubernetes is a best practice that can help improve the scalability, security, and maintainability of your cluster.

    By the end of this tutorial, you'll have added a Kubernetes ConfigMap and Secret to the Azure Voting deployment.

    Passing Environment Data with ConfigMaps and Secrets

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Create the ConfigMap

ConfigMaps can be used in one of two ways: as environment variables or as mounted volumes.
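For reference, here is a rough sketch of the volume approach, which this tutorial does not use. The Pod name, image, and mount path below are hypothetical; only the ConfigMap name matches the one created later in this post.

apiVersion: v1
kind: Pod
metadata:
  name: config-volume-example        # hypothetical Pod, for illustration only
spec:
  containers:
  - name: app
    image: nginx:1.25                # placeholder image
    volumeMounts:
    - name: app-config
      mountPath: /etc/config         # each ConfigMap key shows up as a file in this directory
  volumes:
  - name: app-config
    configMap:
      name: azure-voting-config      # the ConfigMap created below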

For this tutorial you'll use a ConfigMap to create three environment variables inside the pod: DATABASE_SERVER, FIRST_VALUE, and SECOND_VALUE. The DATABASE_SERVER value provides part of the connection string to the Postgres database. FIRST_VALUE and SECOND_VALUE are configuration options that change which voting options the application presents to the users.

    Follow the below steps to create a new ConfigMap:

    1. Create a YAML file named 'config-map.yaml'. In this file, specify the environment variables for the application.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: azure-voting-config
  data:
    DATABASE_SERVER: azure-voting-db
    FIRST_VALUE: "Go"
    SECOND_VALUE: "Rust"
    2. Create the config map in your Kubernetes cluster by running the following command:

      kubectl create -f config-map.yaml

    Create the Secret

    The deployment-db.yaml and deployment-app.yaml are Kubernetes manifests that deploy the Azure Voting App. Currently, those deployment manifests contain the environment variables POSTGRES_PASSWORD and DATABASE_PASSWORD with the value stored as plain text. Your task is to replace that environment variable with a Kubernetes Secret.

    Create a Secret running the following commands:

    1. Encode mypassword.

      echo -n "mypassword" | base64
2. Create a YAML file named secret.yaml. In this file, add POSTGRES_PASSWORD as the key and the encoded value returned above as the value in the data section.

  apiVersion: v1
  kind: Secret
  metadata:
    name: azure-voting-secret
  type: Opaque
  data:
    POSTGRES_PASSWORD: bXlwYXNzd29yZA==
    3. Create the Secret in your Kubernetes cluster by running the following command:

      kubectl create -f secret.yaml

[!WARNING] base64 encoding is a simple and widely supported way to obscure plaintext data, but it is not secure, as it can easily be decoded. If you want to store sensitive data such as passwords, you should use a more secure method like encrypting with a Key Management Service (KMS) before storing it in the Secret.
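To see how easily the value can be recovered, you can decode the string from the manifest above (a quick illustration, not a step in the tutorial):

# base64 is an encoding, not encryption - anyone who can read the Secret can recover the value
echo 'bXlwYXNzd29yZA==' | base64 -d
# mypassword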

    Modify the app deployment manifest

With the ConfigMap and Secret both created, the next step is to replace the environment variables provided in the application deployment manifest with the values stored in the ConfigMap and the Secret.

Complete the following steps to add the ConfigMap and Secret to the deployment manifest:

    1. Open the Kubernetes manifest file deployment-app.yaml.

2. In the containers section, add an envFrom section and update the env section.

  envFrom:
  - configMapRef:
      name: azure-voting-config
  env:
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: azure-voting-secret
        key: POSTGRES_PASSWORD

  Using envFrom exposes all the values within the ConfigMap as environment variables, so you don't have to list them individually.

    3. Save the changes to the deployment manifest file.

    4. Apply the changes to the deployment by running the following command:

      kubectl apply -f deployment-app.yaml

    Modify the database deployment manifest

Next, update the database deployment manifest and replace the plain text environment variable with the Kubernetes Secret.

    1. Open the deployment-db.yaml.

    2. To add the secret to the deployment, replace the env section with the following code:

  env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: azure-voting-secret
        key: POSTGRES_PASSWORD
    3. Apply the updated manifest.

      kubectl apply -f deployment-db.yaml

    Verify the ConfigMap and output environment variables

Verify that the ConfigMap was added to your deployment by running the following command:

kubectl describe deployment azure-voting-app

    Browse the output until you find the envFrom section with the config map reference.

You can also verify that the environment variables from the ConfigMap are being passed to the container by running the command kubectl exec -it <pod-name> -- printenv. This command will show you all the environment variables passed to the pod, including the ones from the ConfigMap.
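For example (the pod name below is hypothetical; substitute one returned by kubectl get pods):

# Find the application pod, then print the environment variables injected from the ConfigMap
kubectl get pods
kubectl exec -it azure-voting-app-56c9ccc89d-2b5mx -- printenv | grep -E 'DATABASE_SERVER|FIRST_VALUE|SECOND_VALUE'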

    By following these steps, you will have successfully added a config map to the Azure Voting App Kubernetes deployment, and the environment variables defined in the config map will be passed to the container running in the pod.

    Verify the Secret and describe the deployment

    Once the secret has been created you can verify it exists by running the following command:

    kubectl get secrets

    You can view additional information, such as labels, annotations, type, and the Data by running kubectl describe:

    kubectl describe secret azure-voting-secret

    By default, the describe command doesn't output the encoded value, but if you output the results as JSON or YAML you'll be able to see the secret's encoded value.

     kubectl get secret azure-voting-secret -o json
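If you only want a single value, a JSONPath query combined with base64 decoding works too (a small convenience, not a required step):

# Extract just the POSTGRES_PASSWORD entry and decode it
kubectl get secret azure-voting-secret -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 -d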

    Conclusion

    In conclusion, using ConfigMaps and Secrets in Kubernetes can help to improve the scalability, security, and maintainability of your cluster. By decoupling configuration data and sensitive information from pod definitions, you can promote better organization and security in your Kubernetes environment. Additionally, separating these elements allows for sharing the same configuration and different secrets across multiple pods and deployments, simplifying scaling and management.

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/ask-the-expert/page/9/index.html b/cnny-2023/tags/ask-the-expert/page/9/index.html index 304ab94a28..9a0e0cc4f0 100644 --- a/cnny-2023/tags/ask-the-expert/page/9/index.html +++ b/cnny-2023/tags/ask-the-expert/page/9/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "ask-the-expert"

    View All Tags

    · 10 min read
    Steven Murawski

    Welcome to Day 5 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about adding persistent storage to our deployment. Today we'll explore the topic of scaling pods and nodes in our Kubernetes cluster.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Scaling Our Application
    • Scaling Pods
    • Scaling Nodes
    • Exercise
    • Resources

    Scaling Our Application

    One of our primary reasons to use a service like Kubernetes to orchestrate our workloads is the ability to scale. We've approached scaling in a multitude of ways over the years, taking advantage of the ever-evolving levels of hardware and software. Kubernetes allows us to scale our units of work, Pods, and the Nodes they run on. This allows us to take advantage of both hardware and software scaling abilities. Kubernetes can help improve the utilization of existing hardware (by scheduling Pods on Nodes that have resource capacity). And, with the capabilities of virtualization and/or cloud hosting (or a bit more work, if you have a pool of physical machines), Kubernetes can expand (or contract) the number of Nodes capable of hosting Pods. Scaling is primarily driven by resource utilization, but can be triggered by a variety of other sources thanks to projects like Kubernetes Event-driven Autoscaling (KEDA).

    Scaling Pods

Our first level of scaling is with our Pods. Earlier, when we worked on our deployment, we talked about how Kubernetes would use the deployment configuration to ensure that we had the desired workloads running. One thing we didn't explore was running more than one instance of a pod. We can define a number of replicas of a pod in our Deployment.

    Manually Scale Pods

    So, if we wanted to define more pods right at the start (or at any point really), we could update our deployment configuration file with the number of replicas and apply that configuration file.

spec:
  replicas: 5

    Or we could use the kubectl scale command to update the deployment with a number of pods to create.

    kubectl scale --replicas=5 deployment/azure-voting-app

    Both of these approaches modify the running configuration of our Kubernetes cluster and request that it ensure that we have that set number of replicas running. Because this was a manual change, the Kubernetes cluster won't automatically increase or decrease the number of pods. It'll just ensure that there are always the specified number of pods running.

    Autoscale Pods with the Horizontal Pod Autoscaler

    Another approach to scaling our pods is to allow the Horizontal Pod Autoscaler to help us scale in response to resources being used by the pod. This requires a bit more configuration up front. When we define our pod in our deployment, we need to include resource requests and limits. The requests help Kubernetes determine what nodes may have capacity for a new instance of a pod. The limit tells us where the node should cap utilization for a particular instance of a pod. For example, we'll update our deployment to request 0.25 CPU and set a limit of 0.5 CPU.

    spec:
      containers:
      - image: acrudavoz.azurecr.io/cnny2023/azure-voting-app-rust:ca4
        name: azure-voting-app-rust
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          value: postgres://postgres:mypassword@10.244.0.29
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m

Now that we've given Kubernetes an allowed range and an idea of what free resources a node should have to place new pods, we can set up autoscaling. Because autoscaling is a persistent configuration, I like to define it in a configuration file that I'll be able to keep with the rest of my cluster configuration. We'll use the kubectl command to help us write the configuration file. We'll request that Kubernetes watch our pods and, when the average CPU utilization is 50% of the requested usage (in our case, if it's using more than 0.375 CPU across the current number of pods), it can grow the number of pods serving requests up to 10. If the utilization drops, Kubernetes will have permission to deprovision pods down to the minimum (three in our example).

kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client

    Which would give us:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: null
  name: azure-voting-app
spec:
  maxReplicas: 10
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: azure-voting-app
  targetCPUUtilizationPercentage: 50
status:
  currentReplicas: 0
  desiredReplicas: 0

So, how often does the autoscaler check the metrics being monitored? The autoscaler checks the Metrics API every 15 seconds, however the pod stats are only updated every 60 seconds. This means that an autoscale event may be evaluated about once a minute. Once a scale-down event happens, however, Kubernetes has a cooldown period to give the remaining pods a chance to distribute the workload and let the new metrics accumulate. There is no delay on scale-up events.

    Application Architecture Considerations

    We've focused in this example on our front end, which is an easier scaling story. When we start talking about scaling our database layers or anything that deals with persistent storage or has primary/replica configuration requirements things get a bit more complicated. Some of these applications may have built-in leader election or could use sidecars to help use existing features in Kubernetes to perform that function. For shared storage scenarios, persistent volumes (or persistent volumes with Azure) can be of help, if the application knows how to play well with shared file access.

    Ultimately, you know your application architecture and, while Kubernetes may not have an exact match to how you are doing things today, the underlying capability is probably there under a different name. This abstraction allows you to more effectively use Kubernetes to operate a variety of workloads with the levels of controls you need.

    Scaling Nodes

We've looked at how to scale our pods, but that assumes we have enough resources in our existing pool of nodes to accommodate those scaling requests. Kubernetes can also help scale our available nodes to ensure that our applications have the necessary resources to meet their performance requirements.

    Manually Scale Nodes

Manually scaling nodes isn't a direct function of Kubernetes, so your operating environment instructions may vary. On Azure, it's pretty straightforward. Using the Azure CLI (or other tools), we can tell our AKS cluster to scale up or scale down the number of nodes in our node pool.

    First, we'll check out how many nodes we currently have in our working environment.

    kubectl get nodes

    This will show us

    azure-voting-app-rust ❯  kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    aks-pool0-37917684-vmss000000 Ready agent 5d21h v1.24.6

    Then, we'll scale it up to three nodes.

    az aks scale --resource-group $ResourceGroup --name $AksName --node-count 3

    Then, we'll check out how many nodes we now have in our working environment.

    kubectl get nodes

    Which returns:

    azure-voting-app-rust ❯  kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    aks-pool0-37917684-vmss000000 Ready agent 5d21h v1.24.6
    aks-pool0-37917684-vmss000001 Ready agent 5m27s v1.24.6
    aks-pool0-37917684-vmss000002 Ready agent 5m10s v1.24.6

    Autoscale Nodes with the Cluster Autoscaler

    Things get more interesting when we start working with the Cluster Autoscaler. The Cluster Autoscaler watches for the inability of Kubernetes to schedule the required number of pods due to resource constraints (and a few other criteria like affinity/anti-affinity). If there are insufficient resources available on the existing nodes, the autoscaler can provision new nodes into the nodepool. Likewise, the autoscaler watches to see if the existing pods could be consolidated to a smaller set of nodes and can remove excess nodes.

    Enabling the autoscaler is likewise an update that can be dependent on where and how your Kubernetes cluster is hosted. Azure makes it easy with a simple Azure CLI command.

    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 1 `
    --max-count 5

    There are a variety of settings that can be configured to tune how the autoscaler works.

    Scaling on Different Events

    CPU and memory utilization are the primary drivers for the Horizontal Pod Autoscaler, but those might not be the best measures as to when you might want to scale workloads. There are other options for scaling triggers and one of the more common plugins to help with that is the Kubernetes Event-driven Autoscaling (KEDA) project. The KEDA project makes it easy to plug in different event sources to help drive scaling. Find more information about using KEDA on AKS here.
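As an illustrative sketch only (this series doesn't deploy KEDA, and the queue name, message count, and TriggerAuthentication below are hypothetical), a KEDA ScaledObject that scales the app on Azure Service Bus queue depth might look something like this:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-voting-app-scaler
spec:
  scaleTargetRef:
    name: azure-voting-app        # the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: votes            # hypothetical queue
      messageCount: "50"          # scale out when the backlog exceeds ~50 messages per replica
    authenticationRef:
      name: servicebus-auth       # hypothetical TriggerAuthentication holding the connection string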

    Exercise

    Let's try out the scaling configurations that we just walked through using our sample application. If you still have your environment from Day 1, you can use that.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Configure Horizontal Pod Autoscaler

    • Edit ./manifests/deployment-app.yaml to include resource requests and limits.
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
    • Apply the updated deployment configuration.
    kubectl apply -f ./manifests/deployment-app.yaml
    • Create the horizontal pod autoscaler configuration and apply it
kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client > ./manifests/scaler-app.yaml
    kubectl apply -f ./manifests/scaler-app.yaml
    • Check to see your pods scale out to the minimum.
    kubectl get pods

    Configure Cluster Autoscaler

    Configuring the basic behavior of the Cluster Autoscaler is a bit simpler. We just need to run the Azure CLI command to enable the autoscaler and define our lower and upper limits.

    • Check the current nodes available (should be 1).
    kubectl get nodes
    • Update the cluster to enable the autoscaler
    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 2 `
    --max-count 5
    • Check to see the current number of nodes (should be 2 now).
    kubectl get nodes

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training

    - + \ No newline at end of file diff --git a/cnny-2023/tags/azure-dns/index.html b/cnny-2023/tags/azure-dns/index.html index 5704c9aa02..a9fa1880ab 100644 --- a/cnny-2023/tags/azure-dns/index.html +++ b/cnny-2023/tags/azure-dns/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "azure-dns"

    View All Tags

    · 10 min read
    Paul Yu

    Welcome to Day 3 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we added configuration, secrets, and storage to our app. Today we'll explore how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Generate TLS certificate and store in Azure Key Vault
    • Implement custom DNS using Azure DNS
    • Enable Web Application Routing add-on for AKS
    • Implement Ingress for the web application
    • Conclusion
    • Resources

    Gather requirements

    Currently, our eShopOnWeb app has three Kubernetes services deployed:

    1. db exposed internally via ClusterIP
    2. api exposed externally via LoadBalancer
    3. web exposed externally via LoadBalancer

As mentioned in my post last week, Services allow applications to communicate with each other using DNS names. Kubernetes has built-in service discovery capabilities that allow Pods to resolve Services simply by using their names.

    In the case of our api and web deployments, they can simply reach the database by calling its name. The service type of ClusterIP for the db can remain as-is since it only needs to be accessed by the api and web apps.
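As a small illustration of what that looks like in practice (the environment variable name is a placeholder and the default namespace is assumed), a Pod spec can reference the db Service by its short name or by its fully qualified form <service>.<namespace>.svc.cluster.local:

env:
- name: DATABASE_HOST                      # placeholder variable name, for illustration only
  value: db                                # resolves to the ClusterIP of the db Service via cluster DNS
  # value: db.default.svc.cluster.local    # equivalent FQDN, assuming the "default" namespace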

    On the other hand, api and web both need to be accessed over the public internet. Currently, these services are using service type LoadBalancer which tells AKS to provision an Azure Load Balancer with a public IP address. No one is going to remember the IP addresses, so we need to make the app more accessible by adding a custom domain name and securing it with a TLS certificate.

    Here's what we're going to need:

    • Custom domain name for our app
    • TLS certificate for the custom domain name
• Routing rule to ensure requests with /api/ in the URL are routed to the backend REST API
• Routing rule to ensure requests without /api/ in the URL are routed to the web UI

    Just like last week, we will use the Web Application Routing add-on for AKS. But this time, we'll integrate it with Azure DNS and Azure Key Vault to satisfy all of our requirements above.

    info

    At the time of this writing the add-on is still in Public Preview

    Generate TLS certificate and store in Azure Key Vault

    We deployed an Azure Key Vault yesterday to store secrets. We'll use it again to store a TLS certificate too.

    Let's create and export a self-signed certificate for the custom domain.

    DNS_NAME=eshoponweb$RANDOM.com
    openssl req -new -x509 -nodes -out web-tls.crt -keyout web-tls.key -subj "/CN=${DNS_NAME}" -addext "subjectAltName=DNS:${DNS_NAME}"
    openssl pkcs12 -export -in web-tls.crt -inkey web-tls.key -out web-tls.pfx -password pass:
    info

    For learning purposes we'll use a self-signed certificate and a fake custom domain name.

    To browse to the site using the fake domain, we'll mimic a DNS lookup by adding an entry to your host file which maps the public IP address assigned to the ingress controller to the custom domain.

    In a production scenario, you will need to have a real domain delegated to Azure DNS and a valid TLS certificate for the domain.

    Grab your Azure Key Vault name and set the value in a variable for later use.

    RESOURCE_GROUP=cnny-week3

    AKV_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.KeyVault/vaults \
    --query "[0].name" -o tsv)

    Grant yourself permissions to get, list, and import certificates.

    MY_USER_NAME=$(az account show --query user.name -o tsv)
    MY_USER_OBJECT_ID=$(az ad user show --id $MY_USER_NAME --query id -o tsv)

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MY_USER_OBJECT_ID \
    --certificate-permissions get list import

    Upload the TLS certificate to Azure Key Vault and grab its certificate URI.

    WEB_TLS_CERT_ID=$(az keyvault certificate import \
    --vault-name $AKV_NAME \
    --name web-tls \
    --file web-tls.pfx \
    --query id \
    --output tsv)

    Implement custom DNS with Azure DNS

    Create a custom domain for our application and grab its Azure resource id.

    DNS_ZONE_ID=$(az network dns zone create \
    --name $DNS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query id \
    --output tsv)

    Enable Web Application Routing add-on for AKS

    As we enable the Web Application Routing add-on, we'll also pass in the Azure DNS Zone resource id which triggers the installation of the external-dns controller in your Kubernetes cluster. This controller will be able to write Azure DNS zone entries on your behalf as you deploy Ingress manifests.

    AKS_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerService/managedClusters \
    --query "[0].name" -o tsv)

    az aks enable-addons \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --addons web_application_routing \
    --dns-zone-resource-id=$DNS_ZONE_ID \
    --enable-secret-rotation

    The add-on will also deploy a new Azure Managed Identity which is used by the external-dns controller when writing Azure DNS zone entries. Currently, it does not have permission to do that, so let's grant it permission.

    # This is where resources are automatically deployed by AKS
    NODE_RESOURCE_GROUP=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query nodeResourceGroup -o tsv)

    # This is the managed identity created by the Web Application Routing add-on
    MANAGED_IDENTTIY_OBJECT_ID=$(az resource show \
    --name webapprouting-${AKS_NAME} \
    --resource-group $NODE_RESOURCE_GROUP \
    --resource-type Microsoft.ManagedIdentity/userAssignedIdentities \
    --query properties.principalId \
    --output tsv)

    # Grant the managed identity permissions to write DNS entries
    az role assignment create \
    --role "DNS Zone Contributor" \
    --assignee $MANAGED_IDENTTIY_OBJECT_ID \
    --scope $DNS_ZONE_ID

    The Azure Managed Identity will also be used to retrieve and rotate TLS certificates from Azure Key Vault. So we'll need to grant it permission for that too.

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MANAGED_IDENTTIY_OBJECT_ID \
    --secret-permissions get \
    --certificate-permissions get

    Implement Ingress for the web application

    Before we create a new Ingress manifest, let's update the existing services to use ClusterIP instead of LoadBalancer. With an Ingress in place, there is no reason why we need the Service resources to be accessible from outside the cluster. The new Ingress will be the only entrypoint for external users.

    We can use the kubectl patch command to update the services

    kubectl patch service api -p '{"spec": {"type": "ClusterIP"}}'
    kubectl patch service web -p '{"spec": {"type": "ClusterIP"}}'

    Deploy a new Ingress to place in front of the web Service. Notice there is a special annotations entry for kubernetes.azure.com/tls-cert-keyvault-uri which points back to our self-signed certificate that was uploaded to Azure Key Vault.

    kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.azure.com/tls-cert-keyvault-uri: ${WEB_TLS_CERT_ID}
  name: web
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - host: ${DNS_NAME}
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 80
        path: /
        pathType: Prefix
      - backend:
          service:
            name: api
            port:
              number: 80
        path: /api
        pathType: Prefix
  tls:
  - hosts:
    - ${DNS_NAME}
    secretName: web-tls
    EOF

In our manifest above, we've also configured the Ingress to route traffic to either the web or api service based on the URL path requested. If the request URL includes /api/ then it will send traffic to the api backend service. Otherwise, it will send traffic to the web service.

    Within a few minutes, the external-dns controller will add an A record to Azure DNS which points to the Ingress resource's public IP. With the custom domain in place, we can simply browse using this domain name.

    info

    As mentioned above, since this is not a real domain name, we need to modify our host file to make it seem like our custom domain is resolving to the Ingress' public IP address.

    To get the ingress public IP, run the following:

    # Get the IP
    kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

    # Get the hostname
    kubectl get ingress web -o jsonpath="{.spec.tls[0].hosts[0]}"

    Next, open your host file and add an entry using the format <YOUR_PUBLIC_IP> <YOUR_CUSTOM_DOMAIN>. Below is an example of what it should look like.

    20.237.116.224 eshoponweb11265.com

    See this doc for more info on how to do this.
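    On Linux or macOS, for example, you could append the entry with a one-liner like the following (an illustrative sketch; on Windows, edit C:\Windows\System32\drivers\etc\hosts by hand instead):

    # Append a hosts entry mapping the ingress IP to the fake custom domain
    INGRESS_IP=$(kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
    echo "${INGRESS_IP} ${DNS_NAME}" | sudo tee -a /etc/hosts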

    When browsing to the website, you may be presented with a warning about the connection not being private. This is because we are using a self-signed certificate. The warning is expected, so go ahead and proceed to load the page.
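    If you prefer the command line, you can also sanity-check the endpoint with curl, telling it to skip certificate validation since the certificate is self-signed:

    # -k skips TLS verification for the self-signed certificate
    curl -k -I https://${DNS_NAME}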

    Why is the Admin page broken?

    If you log in using the admin@microsoft.com account and browse to the Admin page, you'll notice no products are loaded on the page.

    This is because the admin page is built using Blazor and compiled as a WebAssembly application that runs in your browser. When the application was compiled, it packed the appsettings.Development.json file as an embedded resource. This file contains the base URL for the public API and it currently points to https://localhost:5099. Now that we have a domain name, we can update the base URL and point it to our custom domain.

    From the root of the eShopOnWeb repo, update the configuration file using a sed command.

    sed -i -e "s/localhost:5099/${DNS_NAME}/g" ./src/BlazorAdmin/wwwroot/appsettings.Development.json

    Rebuild and push the container to Azure Container Registry.

    # Grab the name of your Azure Container Registry
    ACR_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerRegistry/registries \
    --query "[0].name" -o tsv)

    # Invoke a build and publish job
    az acr build \
    --registry $ACR_NAME \
    --image $ACR_NAME.azurecr.io/web:v0.1.0 \
    --file ./src/Web/Dockerfile .

    Once the container build has completed, we can issue a kubectl patch command to quickly update the web deployment to test our change.

    kubectl patch deployment web -p "$(cat <<EOF
    {
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "name": "web",
                "image": "${ACR_NAME}.azurecr.io/web:v0.1.0"
              }
            ]
          }
        }
      }
    }
    EOF
    )"

    If all went well, you will be able to browse the admin page again and confirm product data is being loaded 🥳

    Conclusion

    The Web Application Routing add-on for AKS aims to streamline the process of exposing your application to the public internet using the open-source NGINX Ingress Controller. With the add-on being managed by Azure, it natively integrates with other Azure services like Azure DNS and eliminates the need to manually create DNS entries. It can also integrate with Azure Key Vault to automatically pull in TLS certificates and rotate them as needed to further reduce operational overhead.

    We are one step closer to production and in the upcoming posts we'll further operationalize and secure our deployment, so stay tuned!

    In the meantime, check out the resources listed below for further reading.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    diff --git a/cnny-2023/tags/azure-key-vault/index.html b/cnny-2023/tags/azure-key-vault/index.html

    2 posts tagged with "azure-key-vault"

    View All Tags

    · 10 min read
    Paul Yu

    Welcome to Day 3 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we added configuration, secrets, and storage to our app. Today we'll explore how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Generate TLS certificate and store in Azure Key Vault
    • Implement custom DNS using Azure DNS
    • Enable Web Application Routing add-on for AKS
    • Implement Ingress for the web application
    • Conclusion
    • Resources

    Gather requirements

    Currently, our eShopOnWeb app has three Kubernetes services deployed:

    1. db exposed internally via ClusterIP
    2. api exposed externally via LoadBalancer
    3. web exposed externally via LoadBalancer

    As mentioned in my post last week, Services allow applications to communicate with each other using DNS names. Kubernetes has service discovery capabilities built in that allow Pods to resolve Services simply by using their names.

    In the case of our api and web deployments, they can simply reach the database by calling its name. The service type of ClusterIP for the db can remain as-is since it only needs to be accessed by the api and web apps.
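    If you want to see that name resolution in action, one option is to run a throwaway pod and resolve the db Service by name (purely an optional check; busybox is just an arbitrary small image):

    # Resolve the db Service from inside the cluster, then clean up the pod
    kubectl run -it --rm dnscheck --image=busybox:1.36 --restart=Never -- nslookup db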

    On the other hand, api and web both need to be accessed over the public internet. Currently, these services are using service type LoadBalancer which tells AKS to provision an Azure Load Balancer with a public IP address. No one is going to remember the IP addresses, so we need to make the app more accessible by adding a custom domain name and securing it with a TLS certificate.
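    You can see the current Service types and their public IPs with a quick query:

    # api and web currently show EXTERNAL-IP values from the Azure Load Balancer
    kubectl get service api web db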

    Here's what we're going to need:

    • Custom domain name for our app
    • TLS certificate for the custom domain name
    • Routing rule to ensure requests with /api/ in the URL are routed to the backend REST API
    • Routing rule to ensure requests without /api/ in the URL are routed to the web UI

    Just like last week, we will use the Web Application Routing add-on for AKS. But this time, we'll integrate it with Azure DNS and Azure Key Vault to satisfy all of our requirements above.

    info

    At the time of this writing, the add-on is still in Public Preview.

    Generate TLS certificate and store in Azure Key Vault

    We deployed an Azure Key Vault yesterday to store secrets. We'll use it again to store a TLS certificate too.

    Let's create and export a self-signed certificate for the custom domain.

    DNS_NAME=eshoponweb$RANDOM.com
    openssl req -new -x509 -nodes -out web-tls.crt -keyout web-tls.key -subj "/CN=${DNS_NAME}" -addext "subjectAltName=DNS:${DNS_NAME}"
    openssl pkcs12 -export -in web-tls.crt -inkey web-tls.key -out web-tls.pfx -password pass:
    info

    For learning purposes we'll use a self-signed certificate and a fake custom domain name.

    To browse to the site using the fake domain, we'll mimic a DNS lookup by adding an entry to your hosts file that maps the public IP address assigned to the ingress controller to the custom domain.

    In a production scenario, you will need to have a real domain delegated to Azure DNS and a valid TLS certificate for the domain.
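    Before uploading the certificate, you can optionally inspect it to confirm the subject and subject alternative name match the fake domain (the -ext flag assumes OpenSSL 1.1.1 or newer):

    openssl x509 -in web-tls.crt -noout -subject -ext subjectAltName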

    Grab your Azure Key Vault name and set the value in a variable for later use.

    RESOURCE_GROUP=cnny-week3

    AKV_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.KeyVault/vaults \
    --query "[0].name" -o tsv)

    Grant yourself permissions to get, list, and import certificates.

    MY_USER_NAME=$(az account show --query user.name -o tsv)
    MY_USER_OBJECT_ID=$(az ad user show --id $MY_USER_NAME --query id -o tsv)

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MY_USER_OBJECT_ID \
    --certificate-permissions get list import

    Upload the TLS certificate to Azure Key Vault and grab its certificate URI.

    WEB_TLS_CERT_ID=$(az keyvault certificate import \
    --vault-name $AKV_NAME \
    --name web-tls \
    --file web-tls.pfx \
    --query id \
    --output tsv)

    Implement custom DNS with Azure DNS

    Create an Azure DNS zone for our custom domain and grab its Azure resource ID.

    DNS_ZONE_ID=$(az network dns zone create \
    --name $DNS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query id \
    --output tsv)

    Enable Web Application Routing add-on for AKS

    As we enable the Web Application Routing add-on, we'll also pass in the Azure DNS Zone resource id which triggers the installation of the external-dns controller in your Kubernetes cluster. This controller will be able to write Azure DNS zone entries on your behalf as you deploy Ingress manifests.

    AKS_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerService/managedClusters \
    --query "[0].name" -o tsv)

    az aks enable-addons \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --addons web_application_routing \
    --dns-zone-resource-id=$DNS_ZONE_ID \
    --enable-secret-rotation

    The add-on will also deploy a new Azure Managed Identity which is used by the external-dns controller when writing Azure DNS zone entries. Currently, it does not have permission to do that, so let's grant it permission.

    # This is where resources are automatically deployed by AKS
    NODE_RESOURCE_GROUP=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query nodeResourceGroup -o tsv)

    # This is the managed identity created by the Web Application Routing add-on
    MANAGED_IDENTITY_OBJECT_ID=$(az resource show \
    --name webapprouting-${AKS_NAME} \
    --resource-group $NODE_RESOURCE_GROUP \
    --resource-type Microsoft.ManagedIdentity/userAssignedIdentities \
    --query properties.principalId \
    --output tsv)

    # Grant the managed identity permissions to write DNS entries
    az role assignment create \
    --role "DNS Zone Contributor" \
    --assignee $MANAGED_IDENTITY_OBJECT_ID \
    --scope $DNS_ZONE_ID

    The Azure Managed Identity will also be used to retrieve and rotate TLS certificates from Azure Key Vault. So we'll need to grant it permission for that too.

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MANAGED_IDENTITY_OBJECT_ID \
    --secret-permissions get \
    --certificate-permissions get

    Implement Ingress for the web application

    Before we create a new Ingress manifest, let's update the existing services to use ClusterIP instead of LoadBalancer. With an Ingress in place, there is no reason for the Service resources to be accessible from outside the cluster. The new Ingress will be the only entry point for external users.

    We can use the kubectl patch command to update the services:

    kubectl patch service api -p '{"spec": {"type": "ClusterIP"}}'
    kubectl patch service web -p '{"spec": {"type": "ClusterIP"}}'

    Deploy a new Ingress to place in front of the web Service. Notice there is a special annotations entry for kubernetes.azure.com/tls-cert-keyvault-uri which points back to our self-signed certificate that was uploaded to Azure Key Vault.

    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.azure.com/tls-cert-keyvault-uri: ${WEB_TLS_CERT_ID}
      name: web
    spec:
      ingressClassName: webapprouting.kubernetes.azure.com
      rules:
      - host: ${DNS_NAME}
        http:
          paths:
          - backend:
              service:
                name: web
                port:
                  number: 80
            path: /
            pathType: Prefix
          - backend:
              service:
                name: api
                port:
                  number: 80
            path: /api
            pathType: Prefix
      tls:
      - hosts:
        - ${DNS_NAME}
        secretName: web-tls
    EOF

    In the manifest above, we've also configured the Ingress to route traffic to either the web or api services based on the requested URL path. If the request URL includes /api/, traffic is sent to the api backend service. Otherwise, traffic is sent to the web service.

    Within a few minutes, the external-dns controller will add an A record to Azure DNS which points to the Ingress resource's public IP. With the custom domain in place, we can simply browse using this domain name.

    info

    As mentioned above, since this is not a real domain name, we need to modify our hosts file to make it seem like our custom domain resolves to the Ingress' public IP address.

    To get the ingress public IP, run the following:

    # Get the IP
    kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

    # Get the hostname
    kubectl get ingress web -o jsonpath="{.spec.tls[0].hosts[0]}"

    Next, open your hosts file and add an entry using the format <YOUR_PUBLIC_IP> <YOUR_CUSTOM_DOMAIN>. Below is an example of what it should look like.

    20.237.116.224 eshoponweb11265.com

    See this doc for more info on how to do this.

    When browsing to the website, you may be presented with a warning about the connection not being private. This is because we are using a self-signed certificate. The warning is expected, so go ahead and proceed to load the page.

    Why is the Admin page broken?

    If you log in using the admin@microsoft.com account and browse to the Admin page, you'll notice no products are loaded on the page.

    This is because the admin page is built using Blazor and compiled as a WebAssembly application that runs in your browser. When the application was compiled, it packed the appsettings.Development.json file as an embedded resource. This file contains the base URL for the public API and it currently points to https://localhost:5099. Now that we have a domain name, we can update the base URL and point it to our custom domain.

    From the root of the eShopOnWeb repo, update the configuration file using a sed command.

    sed -i -e "s/localhost:5099/${DNS_NAME}/g" ./src/BlazorAdmin/wwwroot/appsettings.Development.json

    Rebuild and push the container to Azure Container Registry.

    # Grab the name of your Azure Container Registry
    ACR_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerRegistry/registries \
    --query "[0].name" -o tsv)

    # Invoke a build and publish job
    az acr build \
    --registry $ACR_NAME \
    --image $ACR_NAME.azurecr.io/web:v0.1.0 \
    --file ./src/Web/Dockerfile .

    Once the container build has completed, we can issue a kubectl patch command to quickly update the web deployment to test our change.

    kubectl patch deployment web -p "$(cat <<EOF
    {
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "name": "web",
                "image": "${ACR_NAME}.azurecr.io/web:v0.1.0"
              }
            ]
          }
        }
      }
    }
    EOF
    )"

    If all went well, you will be able to browse the admin page again and confirm product data is being loaded 🥳

    Conclusion

    The Web Application Routing add-on for AKS aims to streamline the process of exposing your application to the public internet using the open-source NGINX Ingress Controller. With the add-on being managed by Azure, it natively integrates with other Azure services like Azure DNS and eliminates the need to manually create DNS entries. It can also integrate with Azure Key Vault to automatically pull in TLS certificates and rotate them as needed to further reduce operational overhead.

    We are one step closer to production and in the upcoming posts we'll further operationalize and secure our deployment, so stay tuned!

    In the meantime, check out the resources listed below for further reading.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    diff --git a/cnny-2023/tags/azure-key-vault/page/2/index.html b/cnny-2023/tags/azure-key-vault/page/2/index.html

    2 posts tagged with "azure-key-vault"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 5 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about debugging and instrumenting our application. Today we'll explore the topic of container image signing and secure supply chain.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Introduction
    • Prerequisites
    • Create a digital signing certificate
    • Generate an Azure Container Registry Token
    • Set up Notation
    • Install the Notation Azure Key Vault Plugin
    • Add the signing Certificate to Notation
    • Sign Container Images
    • Conclusion

    Introduction

    The secure supply chain is a crucial aspect of software development, delivery, and deployment, and digital signing plays a critical role in this process.

    By using digital signatures to verify the authenticity and integrity of container images, organizations can improve the security of their software supply chain and reduce the risk of security breaches and data compromise.

    In this article, you'll learn how to use Notary, an open-source project hosted by the Cloud Native Computing Foundation (CNCF), to digitally sign container images stored in Azure Container Registry.

    Prerequisites

    To follow along, you'll need an instance of:

    Create a digital signing certificate

    A digital signing certificate is a certificate used to digitally sign and verify the authenticity and integrity of digital artifacts such as documents, software, and of course container images.

    Before you can implement digital signatures, you must first create a digital signing certificate.

    Run the following commands to generate the certificate:

    1. Create the policy file

      cat <<EOF > ./my_policy.json
      {
        "issuerParameters": {
          "certificateTransparency": null,
          "name": "Self"
        },
        "x509CertificateProperties": {
          "ekus": [
            "1.3.6.1.5.5.7.3.3"
          ],
          "key_usage": [
            "digitalSignature"
          ],
          "subject": "CN=${keySubjectName}",
          "validityInMonths": 12
        }
      }
      EOF

      The ekus and key usage of this certificate policy dictate that the certificate can only be used for digital signatures.

    2. Create the certificate in Azure Key Vault

      az keyvault certificate create --name $keyName --vault-name $keyVaultName --policy @my_policy.json

      Replace $keyName and $keyVaultName with your desired certificate name and Azure Key Vault instance name.

    Generate an Azure Container Registry token

    Azure Container Registry tokens are used to grant access to the contents of the registry. Tokens can be used for a variety of things such as pulling images, pushing images, or managing the registry.

    As part of the container image signing workflow, you'll need a token to authenticate the Notation CLI with your Azure Container Registry.

    Run the following command to generate an ACR token:

    az acr token create \
    --name $tokenName \
    --registry $registry \
    --scope-map _repositories_admin \
    --query 'credentials.passwords[0].value' \
    --only-show-errors \
    --output tsv

    Replace $tokenName with your name for the ACR token and $registry with the name of your ACR instance.

    Set up Notation

    Notation is the command-line interface for the CNCF Notary project. You'll use it to digitally sign the api and web container images for the eShopOnWeb application.

    Run the following commands to download and install the Notation CLI:

    1. Open a terminal or command prompt window

    2. Download the Notary notation release

      curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      If you're not using Linux, you can find the releases here.

    3. Extract the contents of the notation.tar.gz

      tar xvzf notation.tar.gz > /dev/null 2>&1
    4. Copy the notation binary to the $HOME/bin directory

      cp ./notation $HOME/bin
    5. Add the $HOME/bin directory to the PATH environment variable

      export PATH="$HOME/bin:$PATH"
    6. Remove the downloaded files

      rm notation.tar.gz LICENSE
    7. Check the notation version

      notation --version

    Install the Notation Azure Key Vault plugin

    By design, the Notation CLI supports plugins that extend its digital signing capabilities to remote registries. To sign container images stored in Azure Container Registry, you'll need to install the Azure Key Vault plugin for Notation.

    Run the following commands to install the azure-kv plugin:

    1. Download the plugin

      curl -Lo notation-azure-kv.tar.gz \
      https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      Non-Linux releases can be found here.

    2. Extract to the plugin directory & delete download files

      tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv > /dev/null 2>&1

      rm -rf notation-azure-kv.tar.gz
    3. Verify the plugin was installed

      notation plugin ls

    Add the signing certificate to Notation

    Now that you have Notation and the Azure Key Vault plugin installed, add the certificate's keyId created above to Notation.

    1. Get the Certificate Key ID from Azure Key Vault

      az keyvault certificate show \
      --vault-name $keyVaultName \
      --name $keyName \
      --query "kid" --only-show-errors --output tsv

      Replace $keyVaultName and $keyName with the appropriate information.

    2. Add the Key ID to KMS using Notation

      notation key add --plugin azure-kv --id $keyID $keyName
    3. Check the key list

      notation key ls

    Sign Container Images

    At this point, all that's left is to sign the container images.

    Run the notation sign command to sign the api and web container images:

    notation sign $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword

    notation sign $registry.azurecr.io/api:$tag \
    --username $tokenName \
    --password $tokenPassword

    Replace $registry, $tag, $tokenName, and $tokenPassword with the appropriate values. To improve security, use a SHA hash for the tag.

    NOTE: If you didn't take note of the token password, you can rerun the az acr token create command to generate a new password.
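    If you'd rather reference the images by digest than by tag, something like the following might work (a sketch; it assumes az acr repository show returns the manifest digest for the image):

    # Look up the manifest digest for the web image
    digest=$(az acr repository show \
    --name $registry \
    --image web:$tag \
    --query digest \
    --output tsv)

    # Sign the image by digest instead of by tag
    notation sign $registry.azurecr.io/web@$digest \
    --username $tokenName \
    --password $tokenPassword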

    Conclusion

    Digital signing plays a critical role in ensuring the security of software supply chains.

    By signing software components, organizations can verify the authenticity and integrity of software, helping to prevent unauthorized modifications, tampering, and malware.

    And if you want to take digital signing to a whole new level by using them to prevent the deployment of unsigned container images, check out the Ratify project on GitHub!

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    diff --git a/cnny-2023/tags/azure-kubernetes-service/index.html b/cnny-2023/tags/azure-kubernetes-service/index.html

    21 posts tagged with "azure-kubernetes-service"

    View All Tags

    · 4 min read
    Cory Skimming
    Devanshi Joshi
    Steven Murawski
    Nitya Narasimhan

    Welcome to the Kick-off Post for #30DaysOfCloudNative - one of the core initiatives within #CloudNativeNewYear! Over the next four weeks, join us as we take you from fundamentals to functional usage of Cloud-native technologies, one blog post at a time! Read on to learn a little bit about this initiative and what you can expect to learn from this journey!

    What We'll Cover


    Cloud-native New Year

    Welcome to Week 01 of 🥳 #CloudNativeNewYear! Today, we kick off a full month of content and activities to skill you up on all things Cloud-native on Azure with content, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfCloudNative in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

    What are 3 things you can do today to jumpstart your learning journey?


    #30DaysOfCloudNative

    #30DaysOfCloudNative is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Cloud-native On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative weekly-themed roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON CLOUD-NATIVE FUNDAMENTALS

    Here's a sneak peek at the week 1 schedule. We'll start with a broad review of cloud-native fundamentals and walkthrough the core concepts of microservices, containers and Kubernetes.

    • Jan 23: Learn Core Concepts for Cloud-native
    • Jan 24: Container 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud Native Options

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Cloud-native Fundamentals post on January 23rd!


    diff --git a/cnny-2023/tags/azure-kubernetes-service/page/10/index.html b/cnny-2023/tags/azure-kubernetes-service/page/10/index.html

    21 posts tagged with "azure-kubernetes-service"

    View All Tags

    · 8 min read
    Paul Yu

    Welcome to Day 4 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about how to set app configurations and secrets at runtime using Kubernetes ConfigMaps and Secrets. Today we'll explore the topic of persistent storage on Kubernetes and show how you can leverage Persistent Volumes and Persistent Volume Claims to ensure your PostgreSQL data can survive container restarts.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Containers are ephemeral
    • Persistent storage on Kubernetes
    • Persistent storage on AKS
    • Takeaways
    • Resources

    Containers are ephemeral

    In our sample application, the frontend UI writes vote values to a backend PostgreSQL database. By default the database container stores its data on the container's local file system, so there will be data loss when the pod is re-deployed or crashes as containers are meant to start with a clean slate each time.

    Let's re-deploy our sample app and experience the problem first hand.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests

    Wait for the azure-voting-app service to be assigned a public IP then browse to the website and submit some votes. Use the command below to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Now, let's delete the pods and watch Kubernetes do what it does best... that is, re-schedule pods.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl delete --all pod --wait=false && kubectl get po -w

    Once the pods have been recovered, reload the website and confirm the vote tally has been reset to zero.

    We need to fix this so that the data outlives the container.

    Persistent storage on Kubernetes

    In order for application data to survive crashes and restarts, you must implement Persistent Volumes and Persistent Volume Claims.

    A persistent volume represents storage that is available to the cluster. Storage volumes can be provisioned manually by an administrator or dynamically using Container Storage Interface (CSI) and storage classes, which includes information on how to provision CSI volumes.

    When a user needs to add persistent storage to their application, a persistent volume claim is made to allocate chunks of storage from the volume. This "claim" includes things like volume mode (e.g., file system or block storage), the amount of storage to allocate, the access mode, and optionally a storage class. Once a persistent volume claim has been deployed, users can add the volume to the pod and mount it in a container.

    In the next section, we'll demonstrate how to enable persistent storage on AKS.

    Persistent storage on AKS

    With AKS, CSI drivers and storage classes are pre-deployed into your cluster. This allows you to natively use Azure Disks, Azure Files, and Azure Blob Storage as persistent volumes. You can either bring your own Azure storage account and use it with AKS or have AKS provision an Azure storage account for you.

    To view the Storage CSI drivers that have been enabled in your AKS cluster, run the following command.

    az aks show \
    --name <YOUR_AKS_NAME> \
    --resource-group <YOUR_AKS_RESOURCE_GROUP> \
    --query storageProfile

    You should see output that looks like this.

    {
      "blobCsiDriver": null,
      "diskCsiDriver": {
        "enabled": true,
        "version": "v1"
      },
      "fileCsiDriver": {
        "enabled": true
      },
      "snapshotController": {
        "enabled": true
      }
    }

    To view the storage classes that have been installed in your cluster, run the following command.

    kubectl get storageclass

    Workload requirements will dictate which CSI driver and storage class you will need to use.

    If you need block storage, then you should use the blobCsiDriver. The driver may not be enabled by default but you can enable it by following instructions which can be found in the Resources section below.

    If you need file storage you should leverage either diskCsiDriver or fileCsiDriver. The decision between these two boils down to whether or not you need to have the underlying storage accessible by one pod or multiple pods. It is important to note that diskCsiDriver currently supports access from a single pod only. Therefore, if you need data to be accessible by multiple pods at the same time, then you should opt for fileCsiDriver.

    For our PostgreSQL deployment, we'll use the diskCsiDriver and have AKS create an Azure Disk resource for us. There is no need to create a PV resource; all we need to do is create a PVC using the managed-csi-premium storage class.

    Run the following command to create the PVC.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-azuredisk
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: managed-csi-premium
    EOF

    When you check the PVC resource, you'll notice the STATUS is set to Pending. It will be set to Bound once the volume is mounted in the PostgreSQL container.

    kubectl get persistentvolumeclaim

    Let's delete the azure-voting-db deployment.

    kubectl delete deploy azure-voting-db

    Next, we need to apply an updated deployment manifest which includes our PVC.

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      creationTimestamp: null
      labels:
        app: azure-voting-db
      name: azure-voting-db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: azure-voting-db
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: azure-voting-db
        spec:
          containers:
          - image: postgres:15.0-alpine
            name: postgres
            ports:
            - containerPort: 5432
            env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: azure-voting-secret
                  key: POSTGRES_PASSWORD
            resources: {}
            volumeMounts:
            - name: mypvc
              mountPath: "/var/lib/postgresql/data"
              subPath: "data"
          volumes:
          - name: mypvc
            persistentVolumeClaim:
              claimName: pvc-azuredisk
    EOF

    In the manifest above, you'll see that we are mounting a new volume called mypvc (the name can be whatever you want) in the pod which points to a PVC named pvc-azuredisk. With the volume in place, we can mount it in the container by referencing the name of the volume mypvc and setting the mount path to /var/lib/postgresql/data (which is the default path).

    💡 IMPORTANT: When mounting a volume into a non-empty subdirectory, you must add subPath to the volume mount and point it to a subdirectory in the volume rather than mounting at root. In our case, when Azure Disk is formatted, it leaves a lost+found directory as documented here.

    Watch the pods and wait for the STATUS to show Running and the pod's READY status shows 1/1.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl get po -w

    Verify that the STATUS of the PVC is now set to Bound

    kubectl get persistentvolumeclaim
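    You can also peek inside the database pod to confirm the Azure Disk is mounted at the expected path (an optional check that assumes the deployment and mount path from the manifest above):

    kubectl exec deploy/azure-voting-db -- df -h /var/lib/postgresql/data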

    With the new database container running, let's restart the application pod, wait for the pod's READY status to show 1/1, then head back over to our web browser and submit a few votes.

    kubectl delete pod -lapp=azure-voting-app --wait=false && kubectl get po -lapp=azure-voting-app -w

    Now the moment of truth... let's rip out the pods again, wait for the pods to be re-scheduled, and confirm our vote counts remain intact.

    kubectl delete --all pod --wait=false && kubectl get po -w

    If you navigate back to the website, you'll find the votes are still there 🎉

    Takeaways

    By design, containers are meant to be ephemeral, and stateless workloads are ideal on Kubernetes. However, there will come a time when your data needs to outlive the container. To persist data in your Kubernetes workloads, you need to leverage PVs, PVCs, and optionally storage classes. In our demo scenario, we leveraged CSI drivers built into AKS and created a PVC using a pre-installed storage class. From there, we updated the database deployment to mount the PVC in the container, and AKS did the rest of the work in provisioning the underlying Azure Disk. If the built-in storage classes do not fit your needs, for example, if you need to change the reclaim policy or the SKU of the Azure resource, you can create your own custom storage class and configure it just the way you need it 😊
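    For illustration, a custom storage class might look something like the sketch below. Nothing in the demo requires it, and the name, SKU, and reclaim policy shown here are arbitrary choices:

    kubectl apply -f - <<EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-csi-premium-retain
    provisioner: disk.csi.azure.com
    parameters:
      skuName: Premium_LRS
    reclaimPolicy: Retain
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    EOF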

    We'll revisit this topic again next week but in the meantime, check out some of the resources listed below to learn more.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    diff --git a/cnny-2023/tags/azure-kubernetes-service/page/11/index.html b/cnny-2023/tags/azure-kubernetes-service/page/11/index.html

    21 posts tagged with "azure-kubernetes-service"

    View All Tags

    · 10 min read
    Steven Murawski

    Welcome to Day 5 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about adding persistent storage to our deployment. Today we'll explore the topic of scaling pods and nodes in our Kubernetes cluster.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Scaling Our Application
    • Scaling Pods
    • Scaling Nodes
    • Exercise
    • Resources

    Scaling Our Application

    One of our primary reasons to use a service like Kubernetes to orchestrate our workloads is the ability to scale. We've approached scaling in a multitude of ways over the years, taking advantage of the ever-evolving levels of hardware and software. Kubernetes allows us to scale our units of work, Pods, and the Nodes they run on. This allows us to take advantage of both hardware and software scaling abilities. Kubernetes can help improve the utilization of existing hardware (by scheduling Pods on Nodes that have resource capacity). And, with the capabilities of virtualization and/or cloud hosting (or a bit more work, if you have a pool of physical machines), Kubernetes can expand (or contract) the number of Nodes capable of hosting Pods. Scaling is primarily driven by resource utilization, but can be triggered by a variety of other sources thanks to projects like Kubernetes Event-driven Autoscaling (KEDA).

    Scaling Pods

    Our first level of scaling is with our Pods. Earlier, when we worked on our deployment, we talked about how Kubernetes would use the deployment configuration to ensure that we had the desired workloads running. One thing we didn't explore was running more than one instance of a pod. We can define a number of replicas of a pod in our Deployment.

    Manually Scale Pods

    So, if we wanted to define more pods right at the start (or at any point really), we could update our deployment configuration file with the number of replicas and apply that configuration file.

    spec:
      replicas: 5

    Or we could use the kubectl scale command to update the deployment with a number of pods to create.

    kubectl scale --replicas=5 deployment/azure-voting-app

    Both of these approaches modify the running configuration of our Kubernetes cluster and request that it ensure that we have that set number of replicas running. Because this was a manual change, the Kubernetes cluster won't automatically increase or decrease the number of pods. It'll just ensure that there are always the specified number of pods running.

    Autoscale Pods with the Horizontal Pod Autoscaler

    Another approach to scaling our pods is to allow the Horizontal Pod Autoscaler to help us scale in response to resources being used by the pod. This requires a bit more configuration up front. When we define our pod in our deployment, we need to include resource requests and limits. The requests help Kubernetes determine what nodes may have capacity for a new instance of a pod. The limit tells us where the node should cap utilization for a particular instance of a pod. For example, we'll update our deployment to request 0.25 CPU and set a limit of 0.5 CPU.

        spec:
          containers:
          - image: acrudavoz.azurecr.io/cnny2023/azure-voting-app-rust:ca4
            name: azure-voting-app-rust
            ports:
            - containerPort: 8080
            env:
            - name: DATABASE_URL
              value: postgres://postgres:mypassword@10.244.0.29
            resources:
              requests:
                cpu: 250m
              limits:
                cpu: 500m

    Now that we've given Kubernetes an allowed range and an idea of what free resources a node should have to place new pods, we can set up autoscaling. Because autoscaling is a persistent configuration, I like to define it in a configuration file that I'll be able to keep with the rest of my cluster configuration. We'll use the kubectl command to help us write the configuration file. We'll request that Kubernetes watch our pods, and when the average CPU utilization is 50% of the requested usage (in our case, if it's using more than 0.375 CPU across the current number of pods), it can grow the number of pods serving requests up to 10. If the utilization drops, Kubernetes has permission to deprovision pods down to the minimum (three in our example).

    kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o YAML --dry-run=client

    Which would give us:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      creationTimestamp: null
      name: azure-voting-app
    spec:
      maxReplicas: 10
      minReplicas: 3
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: azure-voting-app
      targetCPUUtilizationPercentage: 50
    status:
      currentReplicas: 0
      desiredReplicas: 0

    So, how often does the autoscaler check the metrics being monitored? The autoscaler checks the Metrics API every 15 seconds; however, the pod stats are only updated every 60 seconds. This means that an autoscale event may be evaluated about once a minute. Once a scale-down event happens, however, Kubernetes has a cooldown period to give the new pods a chance to distribute the workload and let the new metrics accumulate. There is no delay on scale-up events.
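    If you want to watch those decisions as they happen, you can keep an eye on the HorizontalPodAutoscaler resource (press ctrl+c to stop watching):

    kubectl get hpa azure-voting-app --watch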

    Application Architecture Considerations

    We've focused in this example on our front end, which is an easier scaling story. When we start talking about scaling our database layers or anything that deals with persistent storage or has primary/replica configuration requirements things get a bit more complicated. Some of these applications may have built-in leader election or could use sidecars to help use existing features in Kubernetes to perform that function. For shared storage scenarios, persistent volumes (or persistent volumes with Azure) can be of help, if the application knows how to play well with shared file access.

    Ultimately, you know your application architecture and, while Kubernetes may not have an exact match to how you are doing things today, the underlying capability is probably there under a different name. This abstraction allows you to more effectively use Kubernetes to operate a variety of workloads with the levels of controls you need.

    Scaling Nodes

    We've looked at how to scale our pods, but that assumes we have enough resources in our existing pool of nodes to accommodate those scaling requests. Kubernetes can also help scale our available nodes to ensure that our applications have the necessary resources to meet their performance requirements.

    Manually Scale Nodes

    Manually scaling nodes isn't a direct function of Kubernetes, so your operating environment instructions may vary. On Azure, it's pretty straightforward. Using the Azure CLI (or other tools), we can tell our AKS cluster to scale up or scale down the number of nodes in our node pool.

    First, we'll check out how many nodes we currently have in our working environment.

    kubectl get nodes

    This will show us

    azure-voting-app-rust ❯  kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    aks-pool0-37917684-vmss000000 Ready agent 5d21h v1.24.6

    Then, we'll scale it up to three nodes.

    az aks scale --resource-group $ResourceGroup --name $AksName --node-count 3

    Then, we'll check out how many nodes we now have in our working environment.

    kubectl get nodes

    Which returns:

    azure-voting-app-rust ❯  kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    aks-pool0-37917684-vmss000000 Ready agent 5d21h v1.24.6
    aks-pool0-37917684-vmss000001 Ready agent 5m27s v1.24.6
    aks-pool0-37917684-vmss000002 Ready agent 5m10s v1.24.6

    Autoscale Nodes with the Cluster Autoscaler

    Things get more interesting when we start working with the Cluster Autoscaler. The Cluster Autoscaler watches for the inability of Kubernetes to schedule the required number of pods due to resource constraints (and a few other criteria like affinity/anti-affinity). If there are insufficient resources available on the existing nodes, the autoscaler can provision new nodes into the nodepool. Likewise, the autoscaler watches to see if the existing pods could be consolidated to a smaller set of nodes and can remove excess nodes.

    Enabling the autoscaler is likewise an update that can be dependent on where and how your Kubernetes cluster is hosted. Azure makes it easy with a simple Azure CLI command.

    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 1 `
    --max-count 5

    There are a variety of settings that can be configured to tune how the autoscaler works.
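    For example, the autoscaler profile can be tuned through the same az aks update command; the two keys below are just illustrative values, not recommendations:

    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --cluster-autoscaler-profile scan-interval=30s scale-down-unneeded-time=5m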

    Scaling on Different Events

    CPU and memory utilization are the primary drivers for the Horizontal Pod Autoscaler, but those might not be the best measures as to when you might want to scale workloads. There are other options for scaling triggers and one of the more common plugins to help with that is the Kubernetes Event-driven Autoscaling (KEDA) project. The KEDA project makes it easy to plug in different event sources to help drive scaling. Find more information about using KEDA on AKS here.
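    As a rough sketch of what a KEDA-driven scale rule could look like, here is a ScaledObject that uses the cron scaler (this assumes KEDA is already installed in the cluster; the schedule, timezone, and replica counts are arbitrary):

    kubectl apply -f - <<EOF
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: azure-voting-app-scaler
    spec:
      scaleTargetRef:
        name: azure-voting-app
      minReplicaCount: 3
      maxReplicaCount: 10
      triggers:
      - type: cron
        metadata:
          timezone: America/Los_Angeles
          start: 0 8 * * *
          end: 0 18 * * *
          desiredReplicas: "10"
    EOF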

    Exercise

    Let's try out the scaling configurations that we just walked through using our sample application. If you still have your environment from Day 1, you can use that.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Configure Horizontal Pod Autoscaler

    • Edit ./manifests/deployment-app.yaml to include resource requests and limits.
            resources:
              requests:
                cpu: 250m
              limits:
                cpu: 500m
    • Apply the updated deployment configuration.
    kubectl apply -f ./manifests/deployment-app.yaml
    • Create the horizontal pod autoscaler configuration and apply it
    kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o YAML --dry-run=client > ./manifests/scaler-app.yaml
    kubectl apply -f ./manifests/scaler-app.yaml
    • Check to see your pods scale out to the minimum.
    kubectl get pods

    Configure Cluster Autoscaler

    Configuring the basic behavior of the Cluster Autoscaler is a bit simpler. We just need to run the Azure CLI command to enable the autoscaler and define our lower and upper limits.

    • Check the current nodes available (should be 1).
    kubectl get nodes
    • Update the cluster to enable the autoscaler
    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 2 `
    --max-count 5
    • Check to see the current number of nodes (should be 2 now).
    kubectl get nodes

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training

    diff --git a/cnny-2023/tags/azure-kubernetes-service/page/12/index.html b/cnny-2023/tags/azure-kubernetes-service/page/12/index.html

    21 posts tagged with "azure-kubernetes-service"

    View All Tags

    · 14 min read
    Steven Murawski

    Welcome to Day 1 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Last week we talked about Kubernetes Fundamentals. Today we'll explore getting an existing application running in Kubernetes with a full pipeline in GitHub Actions.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Our Application
    • Adding Some Infrastructure as Code
    • Building and Publishing a Container Image
    • Deploying to Kubernetes
    • Summary
    • Resources

    Our Application

    This week we'll be taking an existing application - something similar to a typical line-of-business application - and setting it up to run in Kubernetes. Over the course of the week, we'll address different concerns. Today we'll focus on updating our CI/CD process to handle standing up (or validating that we have) an Azure Kubernetes Service (AKS) environment, building and publishing container images for our web site and API server, and getting those services running in Kubernetes.

    The application we'll be starting with is eShopOnWeb. This application has a web site and API which are backed by a SQL Server instance. It's built in .NET 7, so it's cross-platform.

    info

    For the enterprising among you, you may notice that there are a number of different eShopOn* variants on GitHub, including eShopOnContainers. We aren't using that example as it's more of an end state than a starting place. Afterwards, feel free to check out that example to see what this solution could look like as a series of microservices.

    Adding Some Infrastructure as Code

    Just like last week, we need to stand up an AKS environment. This week, however, rather than running commands in our own shell, we'll set up GitHub Actions to do that for us.

    There is a LOT of plumbing in this section, but once it's set up, it'll make our lives a lot easier. This section ensures that we have an environment to deploy our application into, configured the way we want. We can easily extend this to accommodate multiple environments or add additional microservices with minimal new effort.

    Federated Identity

    Setting up a federated identity will give us a more secure and auditable way of accessing Azure from GitHub Actions. For more about setting up a federated identity, Microsoft Learn has the details on connecting GitHub Actions to Azure.

    Here, we'll just walk through the setup of the identity and configure GitHub to use that identity to deploy our AKS environment and interact with our Azure Container Registry.

    The examples will use PowerShell, but a Bash version of the setup commands is available in the week3/day1 branch.

    Prerequisites

    To follow along, you'll need:

    • a GitHub account
    • an Azure Subscription
    • the Azure CLI
    • and the Git CLI.

    You'll need to fork the source repository under your GitHub user or organization where you can manage secrets and GitHub Actions.

    It would be helpful to have the GitHub CLI, but it's not required.

    Set Up Some Defaults

    You will need to update one or more of the variables (your user or organization, what branch you want to work off of, and possibly the Azure AD application name if there is a conflict).

    # Replace the gitHubOrganizationName value
    # with the user or organization you forked
    # the repository under.

    $githubOrganizationName = 'Azure-Samples'
    $githubRepositoryName = 'eShopOnAKS'
    $branchName = 'week3/day1'
    $applicationName = 'cnny-week3-day1'

    Create an Azure AD Application

    Next, we need to create an Azure AD application.

    # Create an Azure AD application
    $aksDeploymentApplication = New-AzADApplication -DisplayName $applicationName

    Set Up Federation for that Azure AD Application

    And configure that application to allow federated credential requests from our GitHub repository for a particular branch.

    # Create a federated identity credential for the application
    New-AzADAppFederatedCredential `
    -Name $applicationName `
    -ApplicationObjectId $aksDeploymentApplication.Id `
    -Issuer 'https://token.actions.githubusercontent.com' `
    -Audience 'api://AzureADTokenExchange' `
    -Subject "repo:$($githubOrganizationName)/$($githubRepositoryName):ref:refs/heads/$branchName"

    Create a Service Principal for the Azure AD Application

    Once the application has been created, we need a service principal tied to that application. The service principal can be granted rights in Azure.

    # Create a service principal for the application
    New-AzADServicePrincipal -AppId $($aksDeploymentApplication.AppId)

    Give that Service Principal Rights to Azure Resources

    Because our Bicep deployment exists at the subscription level and we are creating role assignments, we need to give it Owner rights. If we changed the scope of the deployment to just a resource group, we could apply more scoped permissions.

    $azureContext = Get-AzContext
    New-AzRoleAssignment `
    -ApplicationId $($aksDeploymentApplication.AppId) `
    -RoleDefinitionName Owner `
    -Scope $azureContext.Subscription.Id
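    If you do narrow the deployment scope to a single resource group, the assignment could instead look something like this (an illustrative example; the Contributor role and the cnny-week3 resource group name are assumptions):

    # Hypothetical: scope the role assignment to one resource group instead of the subscription
    New-AzRoleAssignment `
    -ApplicationId $($aksDeploymentApplication.AppId) `
    -RoleDefinitionName Contributor `
    -Scope "/subscriptions/$($azureContext.Subscription.Id)/resourceGroups/cnny-week3"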

    Add Secrets to GitHub Repository

    If you have the GitHub CLI, you can use that right from your shell to set the secrets needed.

    gh secret set AZURE_CLIENT_ID --body $aksDeploymentApplication.AppId
    gh secret set AZURE_TENANT_ID --body $azureContext.Tenant.Id
    gh secret set AZURE_SUBSCRIPTION_ID --body $azureContext.Subscription.Id

    Otherwise, you can create them through the web interface like I did in the Learn Live event below.

    info

    It may look like the whole video will play, but it'll stop after configuring the secrets in GitHub (after about 9 minutes)

    The video shows creating the Azure AD application, service principals, and configuring the federated identity in Azure AD and GitHub.

    Creating a Bicep Deployment

    Reusable Workflows

    We'll create our Bicep deployment in a reusable workflow. What are they? The previous link has the documentation, or in the video below my colleague Brandon Martinez and I talk about them.

    This workflow is basically the same deployment we did in last week's series, just in GitHub Actions.

    Start by creating a file called deploy_aks.yml in the .github/workflows directory with the below contents.

    name: deploy

    on:
      workflow_call:
        inputs:
          resourceGroupName:
            required: true
            type: string
        secrets:
          AZURE_CLIENT_ID:
            required: true
          AZURE_TENANT_ID:
            required: true
          AZURE_SUBSCRIPTION_ID:
            required: true
        outputs:
          containerRegistryName:
            description: Container Registry Name
            value: ${{ jobs.deploy.outputs.containerRegistryName }}
          containerRegistryUrl:
            description: Container Registry Login Url
            value: ${{ jobs.deploy.outputs.containerRegistryUrl }}
          resourceGroupName:
            description: Resource Group Name
            value: ${{ jobs.deploy.outputs.resourceGroupName }}
          aksName:
            description: Azure Kubernetes Service Cluster Name
            value: ${{ jobs.deploy.outputs.aksName }}

    permissions:
      id-token: write
      contents: read

    jobs:
      validate:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: azure/login@v1
            name: Sign in to Azure
            with:
              client-id: ${{ secrets.AZURE_CLIENT_ID }}
              tenant-id: ${{ secrets.AZURE_TENANT_ID }}
              subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          - uses: azure/arm-deploy@v1
            name: Run preflight validation
            with:
              deploymentName: ${{ github.run_number }}
              scope: subscription
              region: eastus
              template: ./deploy/main.bicep
              parameters: >
                resourceGroup=${{ inputs.resourceGroupName }}
              deploymentMode: Validate

      deploy:
        needs: validate
        runs-on: ubuntu-latest
        outputs:
          containerRegistryName: ${{ steps.deploy.outputs.acr_name }}
          containerRegistryUrl: ${{ steps.deploy.outputs.acr_login_server_url }}
          resourceGroupName: ${{ steps.deploy.outputs.resource_group_name }}
          aksName: ${{ steps.deploy.outputs.aks_name }}
        steps:
          - uses: actions/checkout@v2
          - uses: azure/login@v1
            name: Sign in to Azure
            with:
              client-id: ${{ secrets.AZURE_CLIENT_ID }}
              tenant-id: ${{ secrets.AZURE_TENANT_ID }}
              subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          - uses: azure/arm-deploy@v1
            id: deploy
            name: Deploy Bicep file
            with:
              failOnStdErr: false
              deploymentName: ${{ github.run_number }}
              scope: subscription
              region: eastus
              template: ./deploy/main.bicep
              parameters: >
                resourceGroup=${{ inputs.resourceGroupName }}
    Adding the Bicep Deployment

    Once we have the Bicep deployment workflow, we can add it to the primary build definition in .github/workflows/dotnetcore.yml

    Permissions

    First, we need to add a permissions block to let the workflow request our Azure AD token. This can go towards the top of the YAML file (I started it on line 5).

    permissions:
    id-token: write
    contents: read

    Deploy AKS Job

    Next, we'll add a reference to our reusable workflow. This will go after the build job.

      deploy_aks:
    needs: [build]
    uses: ./.github/workflows/deploy_aks.yml
    with:
    resourceGroupName: 'cnny-week3'
    secrets:
    AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
    AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
    AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Building and Publishing a Container Image

    Now that we have our target environment in place and an Azure Container Registry, we can build and publish our container images.

    Add a Reusable Workflow

    First, we'll create a new file for our reusable workflow at .github/workflows/publish_container_image.yml.

    We'll start the file with a name, the parameters it needs to run, and the permissions requirements for the federated identity request.

    name: Publish Container Images

    on:
    workflow_call:
    inputs:
    containerRegistryName:
    required: true
    type: string
    containerRegistryUrl:
    required: true
    type: string
    githubSha:
    required: true
    type: string
    secrets:
    AZURE_CLIENT_ID:
    required: true
    AZURE_TENANT_ID:
    required: true
    AZURE_SUBSCRIPTION_ID:
    required: true

    permissions:
    id-token: write
    contents: read

    Build the Container Images

Our next step is to build the two container images we'll need for the application: the website and the API. We'll build the container images on our build worker and tag them with the git SHA, so there'll be a direct tie between the point in time in our codebase and the container images that represent it.

    jobs:
    publish_container_image:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: docker build
    run: |
    docker build . -f src/Web/Dockerfile -t ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
    docker build . -f src/PublicApi/Dockerfile -t ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha}}

    Scan the Container Images

    Before we publish those container images, we'll scan them for vulnerabilities and best practice violations. We can add these two steps (one scan for each image).

        - name: scan web container image
    uses: Azure/container-scan@v0
    with:
    image-name: ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha}}
    - name: scan api container image
    uses: Azure/container-scan@v0
    with:
image-name: ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha}}

The container images provided have a few items that'll be found. We can create an allowed list at .github/containerscan/allowedlist.yaml to define vulnerabilities or best practice violations that we'll explicitly allow so they don't fail our build.

    general:
    vulnerabilities:
    - CVE-2022-29458
    - CVE-2022-3715
    - CVE-2022-1304
    - CVE-2021-33560
    - CVE-2020-16156
    - CVE-2019-8457
    - CVE-2018-8292
    bestPracticeViolations:
    - CIS-DI-0001
    - CIS-DI-0005
    - CIS-DI-0006
    - CIS-DI-0008

    Publish the Container Images

    Finally, we'll log in to Azure, then log in to our Azure Container Registry, and push our images.

        - uses: azure/login@v1
    name: Sign in to Azure
    with:
    client-id: ${{ secrets.AZURE_CLIENT_ID }}
    tenant-id: ${{ secrets.AZURE_TENANT_ID }}
    subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
    - name: acr login
    run: az acr login --name ${{ inputs.containerRegistryName }}
    - name: docker push
    run: |
    docker push ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha}}
    docker push ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha}}

    Update the Build With the Image Build and Publish

Now that we have our reusable workflow to create and publish our container images, we can include that in our primary build definition at .github/workflows/dotnetcore.yml.

      publish_container_image:
    needs: [deploy_aks]
    uses: ./.github/workflows/publish_container_image.yml
    with:
    containerRegistryName: ${{ needs.deploy_aks.outputs.containerRegistryName }}
    containerRegistryUrl: ${{ needs.deploy_aks.outputs.containerRegistryUrl }}
    githubSha: ${{ github.sha }}
    secrets:
    AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
    AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
    AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Deploying to Kubernetes

    Finally, we've gotten enough set up that a commit to the target branch will:

    • build and test our application code
    • set up (or validate) our AKS and ACR environment
    • and create, scan, and publish our container images to ACR

    Our last step will be to deploy our application to Kubernetes. We'll use the basic building blocks we worked with last week, deployments and services.

    Starting the Reusable Workflow to Deploy to AKS

    We'll start our workflow with our parameters that we need, as well as the permissions to access the token to log in to Azure.

    We'll check out our code, then log in to Azure, and use the az CLI to get credentials for our AKS cluster.

    name: deploy_to_aks

    on:
    workflow_call:
    inputs:
    aksName:
    required: true
    type: string
    resourceGroupName:
    required: true
    type: string
    containerRegistryUrl:
    required: true
    type: string
    githubSha:
    required: true
    type: string
    secrets:
    AZURE_CLIENT_ID:
    required: true
    AZURE_TENANT_ID:
    required: true
    AZURE_SUBSCRIPTION_ID:
    required: true

    permissions:
    id-token: write
    contents: read

    jobs:
    deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: azure/login@v1
    name: Sign in to Azure
    with:
    client-id: ${{ secrets.AZURE_CLIENT_ID }}
    tenant-id: ${{ secrets.AZURE_TENANT_ID }}
    subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
    - name: Get AKS Credentials
    run: |
    az aks get-credentials --resource-group ${{ inputs.resourceGroupName }} --name ${{ inputs.aksName }}

    Edit the Deployment For Our Current Image Tag

Let's add the Kubernetes manifests to our repo. This post is long enough, so you can find the content for the manifests folder in the source repo under the week3/day1 branch.

    tip

    If you only forked the main branch of the source repo, you can easily get the updated manifests by using the following git commands:

    git remote add upstream https://github.com/Azure-Samples/eShopOnAks
    git fetch upstream week3/day1
    git checkout upstream/week3/day1 manifests

    This will make the week3/day1 branch available locally and then we can update the manifests directory to match the state of that branch.

The deployments and the service definitions should be familiar from last week's content (but not the same). This week, however, there's a new file in the manifests - ./manifests/kustomization.yaml

This file helps us more dynamically edit our Kubernetes manifests, and support for it is baked right into the kubectl command.

    Kustomize Definition

    Kustomize allows us to specify specific resource manifests and areas of that manifest to replace. We've put some placeholders in our file as well, so we can replace those for each run of our CI/CD system.

    In ./manifests/kustomization.yaml you will see:

    resources:
    - deployment-api.yaml
    - deployment-web.yaml

    # Change the image name and version
    images:
    - name: notavalidregistry.azurecr.io/api:v0.1.0
    newName: <YOUR_ACR_SERVER>/api
    newTag: <YOUR_IMAGE_TAG>
    - name: notavalidregistry.azurecr.io/web:v0.1.0
    newName: <YOUR_ACR_SERVER>/web
    newTag: <YOUR_IMAGE_TAG>
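
If you'd like to sanity-check the transformation before wiring it into the pipeline, kubectl can render the kustomized output locally without applying anything (the placeholders will still appear until they're replaced in the next step):

# Render the kustomized manifests without applying them
kubectl kustomize ./manifests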

    Replacing Values in our Build

    Now, we encounter a little problem - our deployment files need to know the tag and ACR server. We can do a bit of sed magic to edit the file on the fly.

    In .github/workflows/deploy_to_aks.yml, we'll add:

          - name: replace_placeholders_with_current_run
    run: |
    sed -i "s/<YOUR_ACR_SERVER>/${{ inputs.containerRegistryUrl }}/g" ./manifests/kustomization.yaml
    sed -i "s/<YOUR_IMAGE_TAG>/${{ inputs.githubSha }}/g" ./manifests/kustomization.yaml

    Deploying the Manifests

With our manifests in place and our kustomization.yaml file (with commands to update it at runtime) ready to go, we can deploy our manifests.

First, we'll deploy our database (deployment and service). Next, we'll use the -k parameter on kubectl to tell it to look for a kustomize configuration, transform the requested manifests, and apply those. Finally, we apply the service definitions for the web and API deployments.

            run: |
    kubectl apply -f ./manifests/deployment-db.yaml \
    -f ./manifests/service-db.yaml
    kubectl apply -k ./manifests
    kubectl apply -f ./manifests/service-api.yaml \
    -f ./manifests/service-web.yaml
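
If you want to confirm the rollout after the workflow runs, you can check it from a local shell. This is just a quick sanity check, assuming your kubectl context points at the cluster (via az aks get-credentials) and the deployments are named web and api as in the sample manifests:

# Wait for the rollouts to finish and list what was created
kubectl rollout status deployment/web
kubectl rollout status deployment/api
kubectl get deployments,services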

    Summary

    We've covered a lot of ground in today's post. We set up federated credentials with GitHub. Then we added reusable workflows to deploy an AKS environment and build/scan/publish our container images, and then to deploy them into our AKS environment.

    This sets us up to start making changes to our application and Kubernetes configuration and have those changes automatically validated and deployed by our CI/CD system. Tomorrow, we'll look at updating our application environment with runtime configuration, persistent storage, and more.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/azure-kubernetes-service/page/13/index.html b/cnny-2023/tags/azure-kubernetes-service/page/13/index.html index 48f805ba39..7f6278d5bb 100644 --- a/cnny-2023/tags/azure-kubernetes-service/page/13/index.html +++ b/cnny-2023/tags/azure-kubernetes-service/page/13/index.html @@ -14,13 +14,13 @@ - +


    · 12 min read
    Paul Yu

    Welcome to Day 2 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about getting an existing application running in Kubernetes with a full pipeline in GitHub Actions. Today we'll evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes and Azure resources.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Implement environment variables using ConfigMaps
    • Implement persistent volumes using Azure Files
    • Implement secrets using Azure Key Vault
    • Re-package deployments
    • Conclusion
    • Resources
    caution

    Before you begin, make sure you've gone through yesterday's post to set up your AKS cluster.

    Gather requirements

The eShopOnWeb application is written in .NET 7 and has two major pieces of functionality. The web UI is where customers can browse and shop. The web UI also includes an admin portal for managing the product catalog. This admin portal is packaged as a WebAssembly application and relies on a separate REST API service. Both the web UI and the REST API connect to the same SQL Server container.

Looking through the source code, which can be found here, we can identify requirements for configs, persistent storage, and secrets.

    Database server

    • Need to store the password for the sa account as a secure secret
    • Need persistent storage volume for data directory
    • Need to inject environment variables for SQL Server license type and EULA acceptance

    Web UI and REST API service

    • Need to store database connection string as a secure secret
    • Need to inject ASP.NET environment variables to override app settings
    • Need persistent storage volume for ASP.NET key storage

    Implement environment variables using ConfigMaps

    ConfigMaps are relatively straight-forward to create. If you were following along with the examples last week, this should be review 😉

    Create a ConfigMap to store database environment variables.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
    name: mssql-settings
    data:
    MSSQL_PID: Developer
    ACCEPT_EULA: "Y"
    EOF

    Create another ConfigMap to store ASP.NET environment variables.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
    name: aspnet-settings
    data:
    ASPNETCORE_ENVIRONMENT: Development
    EOF
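
If you want to double-check what landed in the cluster, you can read the ConfigMaps back:

# Inspect the ConfigMap data we just applied
kubectl get configmap mssql-settings aspnet-settings -o yaml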

    Implement persistent volumes using Azure Files

    Similar to last week, we'll take advantage of storage classes built into AKS. For our SQL Server data, we'll use the azurefile-csi-premium storage class and leverage an Azure Files resource as our PersistentVolume.

    Create a PersistentVolumeClaim (PVC) for persisting SQL Server data.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
    name: mssql-data
    spec:
    accessModes:
    - ReadWriteMany
    storageClassName: azurefile-csi-premium
    resources:
    requests:
    storage: 5Gi
    EOF

    Create another PVC for persisting ASP.NET data.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
    name: aspnet-data
    spec:
    accessModes:
    - ReadWriteMany
    storageClassName: azurefile-csi-premium
    resources:
    requests:
    storage: 5Gi
    EOF
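
To verify the claims, list them and check their status. They should show as Bound once the Azure Files shares are provisioned (exact timing depends on the storage class):

# Check that both PersistentVolumeClaims become Bound
kubectl get pvc mssql-data aspnet-data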

    Implement secrets using Azure Key Vault

It's a well-known fact that Kubernetes Secrets are not really secrets. They're just base64-encoded values and not secure, especially if malicious users have access to your Kubernetes cluster.

    In a production scenario, you will want to leverage an external vault like Azure Key Vault or HashiCorp Vault to encrypt and store secrets.
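
To see why that matters, here's a quick illustration using a hypothetical throwaway Secret named demo-secret; anyone with read access to the Secret can decode it instantly:

# Create a throwaway Secret, read it back, and decode it
kubectl create secret generic demo-secret --from-literal=password='S3cr3t!'
kubectl get secret demo-secret -o jsonpath='{.data.password}' | base64 -d
kubectl delete secret demo-secret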

    With AKS, we can enable the Secrets Store CSI driver add-on which will allow us to leverage Azure Key Vault.

    # Set some variables
    RG_NAME=<YOUR_RESOURCE_GROUP_NAME>
    AKS_NAME=<YOUR_AKS_CLUSTER_NAME>
    ACR_NAME=<YOUR_ACR_NAME>

    az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name $AKS_NAME \
    --resource-group $RG_NAME

    With the add-on enabled, you should see aks-secrets-store-csi-driver and aks-secrets-store-provider-azure resources installed on each node in your Kubernetes cluster.

    Run the command below to verify.

    kubectl get pods \
    --namespace kube-system \
    --selector 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'

    The Secrets Store CSI driver allows us to use secret stores via Container Storage Interface (CSI) volumes. This provider offers capabilities such as mounting and syncing between the secure vault and Kubernetes Secrets. On AKS, the Azure Key Vault Provider for Secrets Store CSI Driver enables integration with Azure Key Vault.

    You may not have an Azure Key Vault created yet, so let's create one and add some secrets to it.

    AKV_NAME=$(az keyvault create \
    --name akv-eshop$RANDOM \
    --resource-group $RG_NAME \
    --query name -o tsv)

    # Database server password
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-password \
    --value "@someThingComplicated1234"

    # Catalog database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-catalog \
    --value "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    # Identity database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-identity \
    --value "Server=db;Database=Microsoft.eShopOnWeb.Identity;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    Pods authentication using Azure Workload Identity

    In order for our Pods to retrieve secrets from Azure Key Vault, we'll need to set up a way for the Pod to authenticate against Azure AD. This can be achieved by implementing the new Azure Workload Identity feature of AKS.

    info

    At the time of this writing, the workload identity feature of AKS is in Preview.

    The workload identity feature within AKS allows us to leverage native Kubernetes resources and link a Kubernetes ServiceAccount to an Azure Managed Identity to authenticate against Azure AD.

For the authentication flow, our Kubernetes cluster will act as an OpenID Connect (OIDC) issuer and will be able to issue identity tokens to ServiceAccounts, which will be assigned to our Pods.

    The Azure Managed Identity will be granted permission to access secrets in our Azure Key Vault and with the ServiceAccount being assigned to our Pods, they will be able to retrieve our secrets.

    For more information on how the authentication mechanism all works, check out this doc.

    To implement all this, start by enabling the new preview feature for AKS.

    az feature register \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"
    caution

    This can take several minutes to complete.

Check the status and ensure the state shows Registered before moving forward.

    az feature show \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"

    Update your AKS cluster to enable the workload identity feature and enable the OIDC issuer endpoint.

    az aks update \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --enable-workload-identity \
    --enable-oidc-issuer

    Create an Azure Managed Identity and retrieve its client ID.

    MANAGED_IDENTITY_CLIENT_ID=$(az identity create \
    --name aks-workload-identity \
    --resource-group $RG_NAME \
    --subscription $(az account show --query id -o tsv) \
    --query 'clientId' -o tsv)

    Create the Kubernetes ServiceAccount.

    # Set namespace (this must align with the namespace that your app is deployed into)
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account name
    SERVICE_ACCOUNT_NAME=eshop-serviceaccount

    # Create the service account
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
    annotations:
    azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
    labels:
    azure.workload.identity/use: "true"
    name: ${SERVICE_ACCOUNT_NAME}
    namespace: ${SERVICE_ACCOUNT_NAMESPACE}
    EOF
    info

Note: to enable this ServiceAccount to work with Azure Workload Identity, you must annotate the resource with azure.workload.identity/client-id and add a label of azure.workload.identity/use: "true".

    That was a lot... Let's review what we just did.

    We have an Azure Managed Identity (object in Azure AD), an OIDC issuer URL (endpoint in our Kubernetes cluster), and a Kubernetes ServiceAccount.

    The next step is to "tie" these components together and establish a Federated Identity Credential so that Azure AD can trust authentication requests from your Kubernetes cluster.

    info

This identity federation can be established between Azure AD and any Kubernetes cluster; not just AKS 🤗

    To establish the federated credential, we'll need the OIDC issuer URL, and a subject which points to your Kubernetes ServiceAccount.

    # Get the OIDC issuer URL
    OIDC_ISSUER_URL=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)

    # Set the subject name using this format: `system:serviceaccount:<YOUR_SERVICE_ACCOUNT_NAMESPACE>:<YOUR_SERVICE_ACCOUNT_NAME>`
    SUBJECT=system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME

    az identity federated-credential create \
    --name aks-federated-credential \
    --identity-name aks-workload-identity \
    --resource-group $RG_NAME \
    --issuer $OIDC_ISSUER_URL \
    --subject $SUBJECT
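
If you'd like to confirm the federated credential was created, you can list the credentials attached to the managed identity:

# List federated credentials on the managed identity
az identity federated-credential list \
--identity-name aks-workload-identity \
--resource-group $RG_NAME \
--output table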

    With the authentication components set, we can now create a SecretProviderClass which includes details about the Azure Key Vault, the secrets to pull out from the vault, and identity used to access the vault.

    # Get the tenant id for the key vault
    TENANT_ID=$(az keyvault show \
    --name $AKV_NAME \
    --resource-group $RG_NAME \
    --query properties.tenantId -o tsv)

    # Create the secret provider for azure key vault
    kubectl apply -f - <<EOF
    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
    name: eshop-azure-keyvault
    spec:
    provider: azure
    parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "${MANAGED_IDENTITY_CLIENT_ID}"
    keyvaultName: "${AKV_NAME}"
    cloudName: ""
    objects: |
    array:
    - |
    objectName: mssql-password
    objectType: secret
    objectVersion: ""
    - |
    objectName: mssql-connection-catalog
    objectType: secret
    objectVersion: ""
    - |
    objectName: mssql-connection-identity
    objectType: secret
    objectVersion: ""
    tenantId: "${TENANT_ID}"
    secretObjects:
    - secretName: eshop-secrets
    type: Opaque
    data:
    - objectName: mssql-password
    key: mssql-password
    - objectName: mssql-connection-catalog
    key: mssql-connection-catalog
    - objectName: mssql-connection-identity
    key: mssql-connection-identity
    EOF

Finally, let's grant the Azure Managed Identity permissions to retrieve secrets from the Azure Key Vault.

    az keyvault set-policy \
    --name $AKV_NAME \
    --secret-permissions get \
    --spn $MANAGED_IDENTITY_CLIENT_ID

    Re-package deployments

    Update your database deployment to load environment variables from our ConfigMap, attach the PVC and SecretProviderClass as volumes, mount the volumes into the Pod, and use the ServiceAccount to retrieve secrets.

Additionally, you may notice the database Pod is set to use fsGroup: 10001 as part of the securityContext. This is required because the MSSQL container runs as a non-root account called mssql; setting fsGroup ensures that account has the permissions it needs to read/write data at the /var/opt/mssql mount path.

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
    name: db
    labels:
    app: db
    spec:
    replicas: 1
    selector:
    matchLabels:
    app: db
    template:
    metadata:
    labels:
    app: db
    spec:
    securityContext:
    fsGroup: 10001
    serviceAccountName: ${SERVICE_ACCOUNT_NAME}
    containers:
    - name: db
    image: mcr.microsoft.com/mssql/server:2019-latest
    ports:
    - containerPort: 1433
    envFrom:
    - configMapRef:
    name: mssql-settings
    env:
    - name: MSSQL_SA_PASSWORD
    valueFrom:
    secretKeyRef:
    name: eshop-secrets
    key: mssql-password
    resources: {}
    volumeMounts:
    - name: mssqldb
    mountPath: /var/opt/mssql
    - name: eshop-secrets
    mountPath: "/mnt/secrets-store"
    readOnly: true
    volumes:
    - name: mssqldb
    persistentVolumeClaim:
    claimName: mssql-data
    - name: eshop-secrets
    csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
    secretProviderClass: eshop-azure-keyvault
    EOF

    We'll update the API and Web deployments in a similar way.

    # Set the image tag
    IMAGE_TAG=<YOUR_IMAGE_TAG>

    # API deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
    name: api
    labels:
    app: api
    spec:
    replicas: 1
    selector:
    matchLabels:
    app: api
    template:
    metadata:
    labels:
    app: api
    spec:
    serviceAccount: ${SERVICE_ACCOUNT_NAME}
    containers:
    - name: api
    image: ${ACR_NAME}.azurecr.io/api:${IMAGE_TAG}
    ports:
    - containerPort: 80
    envFrom:
    - configMapRef:
    name: aspnet-settings
    env:
    - name: ConnectionStrings__CatalogConnection
    valueFrom:
    secretKeyRef:
    name: eshop-secrets
    key: mssql-connection-catalog
    - name: ConnectionStrings__IdentityConnection
    valueFrom:
    secretKeyRef:
    name: eshop-secrets
    key: mssql-connection-identity
    resources: {}
    volumeMounts:
    - name: aspnet
mountPath: /root/.aspnet/https
    - name: eshop-secrets
    mountPath: "/mnt/secrets-store"
    readOnly: true
    volumes:
    - name: aspnet
    persistentVolumeClaim:
    claimName: aspnet-data
    - name: eshop-secrets
    csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
    secretProviderClass: eshop-azure-keyvault
    EOF

    ## Web deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
    name: web
    labels:
    app: web
    spec:
    replicas: 1
    selector:
    matchLabels:
    app: web
    template:
    metadata:
    labels:
    app: web
    spec:
    serviceAccount: ${SERVICE_ACCOUNT_NAME}
    containers:
    - name: web
    image: ${ACR_NAME}.azurecr.io/web:${IMAGE_TAG}
    ports:
    - containerPort: 80
    envFrom:
    - configMapRef:
    name: aspnet-settings
    env:
    - name: ConnectionStrings__CatalogConnection
    valueFrom:
    secretKeyRef:
    name: eshop-secrets
    key: mssql-connection-catalog
    - name: ConnectionStrings__IdentityConnection
    valueFrom:
    secretKeyRef:
    name: eshop-secrets
    key: mssql-connection-identity
    resources: {}
    volumeMounts:
    - name: aspnet
mountPath: /root/.aspnet/https
    - name: eshop-secrets
    mountPath: "/mnt/secrets-store"
    readOnly: true
    volumes:
    - name: aspnet
    persistentVolumeClaim:
    claimName: aspnet-data
    - name: eshop-secrets
    csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
    secretProviderClass: eshop-azure-keyvault
    EOF

    If all went well with your deployment updates, you should be able to browse to your website and buy some merchandise again 🥳

    echo "http://$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Conclusion

Although there are no visible changes to our website, we've made a ton of changes on the Kubernetes backend to make this application much more secure and resilient.

    We used a combination of Kubernetes resources and AKS-specific features to achieve our goal of securing our secrets and ensuring data is not lost on container crashes and restarts.

To learn more about the components we leveraged here today, check out the resources and additional tutorials listed below.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/azure-kubernetes-service/page/14/index.html b/cnny-2023/tags/azure-kubernetes-service/page/14/index.html index 780b514e72..ddde61fd0c 100644 --- a/cnny-2023/tags/azure-kubernetes-service/page/14/index.html +++ b/cnny-2023/tags/azure-kubernetes-service/page/14/index.html @@ -14,13 +14,13 @@ - +


    · 10 min read
    Paul Yu

    Welcome to Day 3 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we added configuration, secrets, and storage to our app. Today we'll explore how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Generate TLS certificate and store in Azure Key Vault
    • Implement custom DNS using Azure DNS
    • Enable Web Application Routing add-on for AKS
    • Implement Ingress for the web application
    • Conclusion
    • Resources

    Gather requirements

    Currently, our eShopOnWeb app has three Kubernetes services deployed:

    1. db exposed internally via ClusterIP
    2. api exposed externally via LoadBalancer
    3. web exposed externally via LoadBalancer

As mentioned in my post last week, Services allow applications to communicate with each other using DNS names. Kubernetes has service discovery capabilities built in that allow Pods to resolve Services simply by using their names.

    In the case of our api and web deployments, they can simply reach the database by calling its name. The service type of ClusterIP for the db can remain as-is since it only needs to be accessed by the api and web apps.
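
If you want to see that name resolution in action, a throwaway Pod can resolve the db Service by name. This is just a quick check (assuming everything is running in the default namespace):

# Resolve the db Service from inside the cluster using a temporary Pod
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup db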

    On the other hand, api and web both need to be accessed over the public internet. Currently, these services are using service type LoadBalancer which tells AKS to provision an Azure Load Balancer with a public IP address. No one is going to remember the IP addresses, so we need to make the app more accessible by adding a custom domain name and securing it with a TLS certificate.

    Here's what we're going to need:

    • Custom domain name for our app
    • TLS certificate for the custom domain name
• Routing rule to ensure requests with /api/ in the URL are routed to the backend REST API
• Routing rule to ensure requests without /api/ in the URL are routed to the web UI

    Just like last week, we will use the Web Application Routing add-on for AKS. But this time, we'll integrate it with Azure DNS and Azure Key Vault to satisfy all of our requirements above.

    info

At the time of this writing, the add-on is still in Public Preview.

    Generate TLS certificate and store in Azure Key Vault

    We deployed an Azure Key Vault yesterday to store secrets. We'll use it again to store a TLS certificate too.

    Let's create and export a self-signed certificate for the custom domain.

    DNS_NAME=eshoponweb$RANDOM.com
    openssl req -new -x509 -nodes -out web-tls.crt -keyout web-tls.key -subj "/CN=${DNS_NAME}" -addext "subjectAltName=DNS:${DNS_NAME}"
    openssl pkcs12 -export -in web-tls.crt -inkey web-tls.key -out web-tls.pfx -password pass:
    info

    For learning purposes we'll use a self-signed certificate and a fake custom domain name.

    To browse to the site using the fake domain, we'll mimic a DNS lookup by adding an entry to your host file which maps the public IP address assigned to the ingress controller to the custom domain.

    In a production scenario, you will need to have a real domain delegated to Azure DNS and a valid TLS certificate for the domain.

    Grab your Azure Key Vault name and set the value in a variable for later use.

    RESOURCE_GROUP=cnny-week3

    AKV_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.KeyVault/vaults \
    --query "[0].name" -o tsv)

    Grant yourself permissions to get, list, and import certificates.

    MY_USER_NAME=$(az account show --query user.name -o tsv)
    MY_USER_OBJECT_ID=$(az ad user show --id $MY_USER_NAME --query id -o tsv)

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MY_USER_OBJECT_ID \
    --certificate-permissions get list import

    Upload the TLS certificate to Azure Key Vault and grab its certificate URI.

    WEB_TLS_CERT_ID=$(az keyvault certificate import \
    --vault-name $AKV_NAME \
    --name web-tls \
    --file web-tls.pfx \
    --query id \
    --output tsv)

    Implement custom DNS with Azure DNS

    Create a custom domain for our application and grab its Azure resource id.

    DNS_ZONE_ID=$(az network dns zone create \
    --name $DNS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query id \
    --output tsv)

    Enable Web Application Routing add-on for AKS

    As we enable the Web Application Routing add-on, we'll also pass in the Azure DNS Zone resource id which triggers the installation of the external-dns controller in your Kubernetes cluster. This controller will be able to write Azure DNS zone entries on your behalf as you deploy Ingress manifests.

    AKS_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerService/managedClusters \
    --query "[0].name" -o tsv)

    az aks enable-addons \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --addons web_application_routing \
    --dns-zone-resource-id=$DNS_ZONE_ID \
    --enable-secret-rotation

    The add-on will also deploy a new Azure Managed Identity which is used by the external-dns controller when writing Azure DNS zone entries. Currently, it does not have permission to do that, so let's grant it permission.

    # This is where resources are automatically deployed by AKS
    NODE_RESOURCE_GROUP=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query nodeResourceGroup -o tsv)

    # This is the managed identity created by the Web Application Routing add-on
MANAGED_IDENTITY_OBJECT_ID=$(az resource show \
    --name webapprouting-${AKS_NAME} \
    --resource-group $NODE_RESOURCE_GROUP \
    --resource-type Microsoft.ManagedIdentity/userAssignedIdentities \
    --query properties.principalId \
    --output tsv)

    # Grant the managed identity permissions to write DNS entries
    az role assignment create \
    --role "DNS Zone Contributor" \
--assignee $MANAGED_IDENTITY_OBJECT_ID \
    --scope $DNS_ZONE_ID

    The Azure Managed Identity will also be used to retrieve and rotate TLS certificates from Azure Key Vault. So we'll need to grant it permission for that too.

    az keyvault set-policy \
    --name $AKV_NAME \
--object-id $MANAGED_IDENTITY_OBJECT_ID \
    --secret-permissions get \
    --certificate-permissions get

    Implement Ingress for the web application

    Before we create a new Ingress manifest, let's update the existing services to use ClusterIP instead of LoadBalancer. With an Ingress in place, there is no reason why we need the Service resources to be accessible from outside the cluster. The new Ingress will be the only entrypoint for external users.

We can use the kubectl patch command to update the services:

    kubectl patch service api -p '{"spec": {"type": "ClusterIP"}}'
    kubectl patch service web -p '{"spec": {"type": "ClusterIP"}}'

    Deploy a new Ingress to place in front of the web Service. Notice there is a special annotations entry for kubernetes.azure.com/tls-cert-keyvault-uri which points back to our self-signed certificate that was uploaded to Azure Key Vault.

    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
    annotations:
    kubernetes.azure.com/tls-cert-keyvault-uri: ${WEB_TLS_CERT_ID}
    name: web
    spec:
    ingressClassName: webapprouting.kubernetes.azure.com
    rules:
    - host: ${DNS_NAME}
    http:
    paths:
    - backend:
    service:
    name: web
    port:
    number: 80
    path: /
    pathType: Prefix
    - backend:
    service:
    name: api
    port:
    number: 80
    path: /api
    pathType: Prefix
    tls:
    - hosts:
    - ${DNS_NAME}
    secretName: web-tls
    EOF

In our manifest above, we've also configured the Ingress to route traffic to either the web or api Services based on the URL path requested. If the request URL includes /api/, then it will send traffic to the api backend service. Otherwise, it will send traffic to the web service.

    Within a few minutes, the external-dns controller will add an A record to Azure DNS which points to the Ingress resource's public IP. With the custom domain in place, we can simply browse using this domain name.

    info

    As mentioned above, since this is not a real domain name, we need to modify our host file to make it seem like our custom domain is resolving to the Ingress' public IP address.

    To get the ingress public IP, run the following:

    # Get the IP
    kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

    # Get the hostname
    kubectl get ingress web -o jsonpath="{.spec.tls[0].hosts[0]}"

    Next, open your host file and add an entry using the format <YOUR_PUBLIC_IP> <YOUR_CUSTOM_DOMAIN>. Below is an example of what it should look like.

    20.237.116.224 eshoponweb11265.com

    See this doc for more info on how to do this.

    When browsing to the website, you may be presented with a warning about the connection not being private. This is due to the fact that we are using a self-signed certificate. This is expected, so go ahead and proceed anyway to load up the page.

    Why is the Admin page broken?

    If you log in using the admin@microsoft.com account and browse to the Admin page, you'll notice no products are loaded on the page.

    This is because the admin page is built using Blazor and compiled as a WebAssembly application that runs in your browser. When the application was compiled, it packed the appsettings.Development.json file as an embedded resource. This file contains the base URL for the public API and it currently points to https://localhost:5099. Now that we have a domain name, we can update the base URL and point it to our custom domain.

    From the root of the eShopOnWeb repo, update the configuration file using a sed command.

    sed -i -e "s/localhost:5099/${DNS_NAME}/g" ./src/BlazorAdmin/wwwroot/appsettings.Development.json

    Rebuild and push the container to Azure Container Registry.

    # Grab the name of your Azure Container Registry
    ACR_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerRegistry/registries \
    --query "[0].name" -o tsv)

    # Invoke a build and publish job
    az acr build \
    --registry $ACR_NAME \
    --image $ACR_NAME.azurecr.io/web:v0.1.0 \
    --file ./src/Web/Dockerfile .

    Once the container build has completed, we can issue a kubectl patch command to quickly update the web deployment to test our change.

    kubectl patch deployment web -p "$(cat <<EOF
    {
    "spec": {
    "template": {
    "spec": {
    "containers": [
    {
    "name": "web",
    "image": "${ACR_NAME}.azurecr.io/web:v0.1.0"
    }
    ]
    }
    }
    }
    }
    EOF
    )"

    If all went well, you will be able to browse the admin page again and confirm product data is being loaded 🥳

    Conclusion

The Web Application Routing add-on for AKS aims to streamline the process of exposing your application to the public using the open-source NGINX Ingress Controller. With the add-on being managed by Azure, it natively integrates with other Azure services like Azure DNS and eliminates the need to manually create DNS entries. It can also integrate with Azure Key Vault to automatically pull in TLS certificates and rotate them as needed to further reduce operational overhead.

    We are one step closer to production and in the upcoming posts we'll further operationalize and secure our deployment, so stay tuned!

    In the meantime, check out the resources listed below for further reading.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/azure-kubernetes-service/page/15/index.html b/cnny-2023/tags/azure-kubernetes-service/page/15/index.html index ab2f33ae43..ad9d12ab0b 100644 --- a/cnny-2023/tags/azure-kubernetes-service/page/15/index.html +++ b/cnny-2023/tags/azure-kubernetes-service/page/15/index.html @@ -14,13 +14,13 @@ - +


    · 9 min read
    Steven Murawski

    Welcome to Day 4 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we exposed the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS. Today we'll explore the topic of debugging and instrumentation.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Debugging
    • Bridge To Kubernetes
    • Instrumentation
    • Resources: For self-study!

    Debugging

    Debugging applications in a Kubernetes cluster can be challenging for several reasons:

    • Complexity: Kubernetes is a complex system with many moving parts, including pods, nodes, services, and config maps, all of which can interact in unexpected ways and cause issues.
    • Distributed Environment: Applications running in a Kubernetes cluster are often distributed across multiple nodes, which makes it harder to determine the root cause of an issue.
    • Logging and Monitoring: Debugging an application in a Kubernetes cluster requires access to logs and performance metrics, which can be difficult to obtain in a large and dynamic environment.
    • Resource Management: Kubernetes manages resources such as CPU and memory, which can impact the performance and behavior of applications. Debugging resource-related issues requires a deep understanding of the Kubernetes resource model and the underlying infrastructure.
    • Dynamic Nature: Kubernetes is designed to be dynamic, with the ability to add and remove resources as needed. This dynamic nature can make it difficult to reproduce issues and debug problems.

    However, there are many tools and practices that can help make debugging applications in a Kubernetes cluster easier, such as using centralized logging, monitoring, and tracing solutions, and following best practices for managing resources and deployment configurations.

    There's also another great tool in our toolbox - Bridge to Kubernetes.

    Bridge to Kubernetes

    Bridge to Kubernetes is a great tool for microservice development and debugging applications without having to locally replicate all the required microservices.

    Bridge to Kubernetes works with Visual Studio or Visual Studio Code.

    We'll walk through using it with Visual Studio Code.

    Connecting Bridge to Kubernetes to Our Cluster

    Ensure your AKS cluster is the default for kubectl

    If you've recently spun up a new AKS cluster or you have been working with a different cluster, you may need to change what cluster credentials you have configured.

    If it's a new cluster, we can use:

    RESOURCE_GROUP=<YOUR RESOURCE GROUP NAME>
    CLUSTER_NAME=<YOUR AKS CLUSTER NAME>
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME

    Open the command palette

    Open the command palette and find Bridge to Kubernetes: Configure. You may need to start typing the name to get it to show up.

    The command palette for Visual Studio Code is open and the first item is Bridge to Kubernetes: Configure

    Pick the service you want to debug

    Bridge to Kubernetes will redirect a service for you. Pick the service you want to redirect, in this case we'll pick web.

    Selecting the `web` service to redirect in Visual Studio Code

    Identify the port your application runs on

    Next, we'll be prompted to identify what port our application will run on locally. For this application it'll be 5001, but that's just specific to this application (and the default for ASP.NET 7, I believe).

    Setting port 5001 as the port to redirect to the `web` Kubernetes service in Visual Studio Code

    Pick a debug configuration to extend

Bridge to Kubernetes has a couple of ways to run - it can inject its setup and teardown into your existing debug configurations. We'll pick .NET Core Launch (web).

    Telling Bridge to Kubernetes to use the .NET Core Launch (web) debug configuration in Visual Studio Code

    Forward Traffic for All Requests

    The last prompt you'll get in the configuration is about how you want Bridge to Kubernetes to handle re-routing traffic. The default is that all requests into the service will get your local version.

    You can also redirect specific traffic. Bridge to Kubernetes will set up a subdomain and route specific traffic to your local service, while allowing other traffic to the deployed service.

    Allowing the launch of Endpoint Manager on Windows

    Using Bridge to Kubernetes to Debug Our Service

    Now that we've configured Bridge to Kubernetes, we see that tasks and a new launch configuration have been added.

    Added to .vscode/tasks.json:

            {
    "label": "bridge-to-kubernetes.resource",
    "type": "bridge-to-kubernetes.resource",
    "resource": "web",
    "resourceType": "service",
    "ports": [
    5001
    ],
    "targetCluster": "aks1",
    "targetNamespace": "default",
    "useKubernetesServiceEnvironmentVariables": false
    },
    {
    "label": "bridge-to-kubernetes.compound",
    "dependsOn": [
    "bridge-to-kubernetes.resource",
    "build"
    ],
    "dependsOrder": "sequence"
    }

    And added to .vscode/launch.json:

    {
    "name": ".NET Core Launch (web) with Kubernetes",
    "type": "coreclr",
    "request": "launch",
    "preLaunchTask": "bridge-to-kubernetes.compound",
    "program": "${workspaceFolder}/src/Web/bin/Debug/net7.0/Web.dll",
    "args": [],
    "cwd": "${workspaceFolder}/src/Web",
    "stopAtEntry": false,
    "env": {
    "ASPNETCORE_ENVIRONMENT": "Development",
    "ASPNETCORE_URLS": "http://+:5001"
    },
    "sourceFileMap": {
    "/Views": "${workspaceFolder}/Views"
    }
    }

    Launch the debug configuration

    We can start the process with the .NET Core Launch (web) with Kubernetes launch configuration in the Debug pane in Visual Studio Code.

    Launch the `.NET Core Launch (web) with Kubernetes` from the Debug pane in Visual Studio Code

    Enable the Endpoint Manager

    Part of this process includes a local service to help manage the traffic routing and your hosts file. This will require admin or sudo privileges. On Windows, you'll get a prompt like:

    Prompt to launch the endpoint manager.

    Access your Kubernetes cluster "locally"

Bridge to Kubernetes will set up a tunnel (thanks to port forwarding) to your local workstation and create local endpoints for the other Kubernetes-hosted services in your cluster, as well as pretend to be a pod in that cluster (for the application you are debugging).
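
For a sense of what that tunneling replaces, the manual equivalent without Bridge to Kubernetes would be an ad-hoc port forward per dependency. As a rough sketch, forwarding the db Service from our cluster would look like:

# Manually forward the db Service to localhost (Bridge to Kubernetes automates this per service)
kubectl port-forward service/db 1433:1433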

    Output from Bridge To Kubernetes setup task.

    After making the connection to your Kubernetes cluster, the launch configuration will continue. In this case, we'll make a debug build of the application and attach the debugger. (This process may cause the terminal in VS Code to scroll with build output. You can find the Bridge to Kubernetes output with the local IP addresses and ports in the Output pane for Bridge to Kubernetes.)

    You can set breakpoints, use your debug console, set watches, run tests against your local version of the service.

    Exploring the Running Application Environment

    One of the cool things that Bridge to Kubernetes does for our debugging experience is bring the environment configuration that our deployed pod would inherit. When we launch our app, it'll see configuration and secrets that we'd expect our pod to be running with.

    To test this, we'll set a breakpoint in our application's start up to see what SQL Server is being used. We'll set a breakpoint at src/Infrastructure/Dependencies.cs on line 32.

    Then, we will start debugging the application with Bridge to Kubernetes. When it hits the breakpoint, we'll open the Debug pane and type configuration.GetConnectionString("CatalogConnection").

    When we run locally (not with Bridge to Kubernetes), we'd see:

    configuration.GetConnectionString("CatalogConnection")
    "Server=(localdb)\\mssqllocaldb;Integrated Security=true;Initial Catalog=Microsoft.eShopOnWeb.CatalogDb;"

But, with Bridge to Kubernetes, we see something more like (yours will vary based on the password you set):

    configuration.GetConnectionString("CatalogConnection")
    "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=*****************;TrustServerCertificate=True;"

    Debugging our local application connected to Kubernetes.

We can see that the database server configured is based on our db service and the password is pulled from our secret in Azure Key Vault (via AKS).

    This helps us run our local application just like it was actually in our cluster.

    Going Further

    Bridge to Kubernetes also supports more advanced scenarios and, as you need to start routing traffic around inside your cluster, may require you to modify your application to pass along a kubernetes-route-as header to help ensure that traffic for your debugging workloads is properly handled. The docs go into much greater detail about that.

    Instrumentation

    Now that we've figured out our debugging story, we'll need to ensure that we have the right context clues to find where we need to debug or to give us a better idea of how well our microservices are running.

    Logging and Tracing

    Logging and tracing become even more critical in Kubernetes, where your application could be running in a number of pods across different nodes. When you have an issue, in addition to the normal application data, you'll want to know what pod and what node had the issue, what the state of those resources were (were you resource constrained or were shared resources unavailable?), and if autoscaling is enabled, you'll want to know if a scale event has been triggered. There are a multitude of other concerns based on your application and the environment you maintain.
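
Before reaching for a full observability stack, kubectl alone can answer the "which pod, on which node, in what state" questions. A quick sketch, assuming our Deployments carry an app=web label as in the sample manifests:

# Which pods exist and which nodes they landed on
kubectl get pods -l app=web -o wide

# Recent logs from every matching pod, prefixed with the pod name
kubectl logs -l app=web --all-containers --prefix --since=15m

# Events often surface scheduling, OOM, and scaling issues
kubectl get events --sort-by=.lastTimestamp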

    Given these informational needs, it's crucial to revisit your existing logging and instrumentation. Most frameworks and languages have extensible logging, tracing, and instrumentation libraries that you can iteratively add information to, such as pod and node states, and ensuring that requests can be traced across your microservices. This will pay you back time and time again when you have to troubleshoot issues in your existing environment.

    Centralized Logging

    To enhance the troubleshooting process further, it's important to implement centralized logging to consolidate logs from all your microservices into a single location. This makes it easier to search and analyze logs when you're troubleshooting an issue.

    Automated Alerting

    Additionally, implementing automated alerting, such as sending notifications when specific conditions occur in the logs, can help you detect issues before they escalate.

    End to end Visibility

    End-to-end visibility is also essential in understanding the flow of requests and responses between microservices in a distributed system. With end-to-end visibility, you can quickly identify bottlenecks and slowdowns in the system, helping you to resolve issues more efficiently.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/azure-kubernetes-service/page/16/index.html b/cnny-2023/tags/azure-kubernetes-service/page/16/index.html index 08baea630d..41aabf42cf 100644 --- a/cnny-2023/tags/azure-kubernetes-service/page/16/index.html +++ b/cnny-2023/tags/azure-kubernetes-service/page/16/index.html @@ -14,13 +14,13 @@ - +


    · 6 min read
    Josh Duffney

    Welcome to Day 5 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about debugging and instrumenting our application. Today we'll explore the topic of container image signing and secure supply chain.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Introduction
    • Prerequisites
    • Create a digital signing certificate
    • Generate an Azure Container Registry Token
    • Set up Notation
    • Install the Notation Azure Key Vault Plugin
    • Add the signing Certificate to Notation
    • Sign Container Images
    • Conclusion

    Introduction

    The secure supply chain is a crucial aspect of software development, delivery, and deployment, and digital signing plays a critical role in this process.

By using digital signatures to verify the authenticity and integrity of container images, organizations can improve the security of their software supply chain and reduce the risk of security breaches and data compromise.

In this article, you'll learn how to use Notary, an open-source project hosted by the Cloud Native Computing Foundation (CNCF), to digitally sign container images stored in Azure Container Registry.

    Prerequisites

    To follow along, you'll need an instance of:

    Create a digital signing certificate

A digital signing certificate is a certificate used to digitally sign and verify the authenticity and integrity of digital artifacts such as documents, software, and, of course, container images.

    Before you can implement digital signatures, you must first create a digital signing certificate.

Run the following commands to generate the certificate:

    1. Create the policy file

      cat <<EOF > ./my_policy.json
      {
      "issuerParameters": {
      "certificateTransparency": null,
      "name": "Self"
      },
      "x509CertificateProperties": {
      "ekus": [
      "1.3.6.1.5.5.7.3.3"
      ],
      "key_usage": [
      "digitalSignature"
      ],
      "subject": "CN=${keySubjectName}",
      "validityInMonths": 12
      }
      }
      EOF

      The ekus and key usage of this certificate policy dictate that the certificate can only be used for digital signatures.

    2. Create the certificate in Azure Key Vault

      az keyvault certificate create --name $keyName --vault-name $keyVaultName --policy @my_policy.json

      Replace $keyName and $keyVaultName with your desired certificate name and Azure Key Vault instance name.

    Generate an Azure Container Registry token

    Azure Container Registry tokens are used to grant access to the contents of the registry. Tokens can be used for a variety of things such as pulling images, pushing images, or managing the registry.

    As part of the container image signing workflow, you'll need a token to authenticate the Notation CLI with your Azure Container Registry.

    Run the following command to generate an ACR token:

    az acr token create \
    --name $tokenName \
    --registry $registry \
    --scope-map _repositories_admin \
    --query 'credentials.passwords[0].value' \
    --only-show-errors \
    --output tsv

    Replace $tokenName with your name for the ACR token and $registry with the name of your ACR instance.
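    The signing commands later in this walkthrough reference a $tokenPassword variable. One way to populate it - a hedged sketch, not the only option - is to capture the command's output directly:

    # Capture the generated token password for later use by the Notation CLI
    tokenPassword=$(az acr token create \
      --name $tokenName \
      --registry $registry \
      --scope-map _repositories_admin \
      --query 'credentials.passwords[0].value' \
      --only-show-errors \
      --output tsv)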

    Set up Notation

    Notation is the command-line interface for the CNCF Notary project. You'll use it to digitally sign the api and web container images for the eShopOnWeb application.

    Run the following commands to download and install the Notation CLI:

    1. Open a terminal or command prompt window

    2. Download the Notation release from the Notary project

      curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      If you're not using Linux, you can find the releases here.

    3. Extract the contents of the notation.tar.gz

      tar xvzf notation.tar.gz > /dev/null 2>&1
    4. Copy the notation binary to the $HOME/bin directory

      cp ./notation $HOME/bin
    5. Add the $HOME/bin directory to the PATH environment variable

      export PATH="$HOME/bin:$PATH"
    6. Remove the downloaded files

      rm notation.tar.gz LICENSE
    7. Check the notation version

      notation --version

    Install the Notation Azure Key Vault plugin

    By design, the Notation CLI supports plugins that extend its digital signing capabilities to remote registries and key stores. To sign container images stored in Azure Container Registry with a key in Azure Key Vault, you'll need to install the Azure Key Vault plugin for Notation.

    Run the following commands to install the azure-kv plugin:

    1. Download the plugin

      curl -Lo notation-azure-kv.tar.gz \
      https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      Non-Linux releases can be found here.

    2. Extract to the plugin directory & delete download files

      tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv > /dev/null 2>&1

      rm -rf notation-azure-kv.tar.gz
    3. Verify the plugin was installed

      notation plugin ls

    Add the signing certificate to Notation

    Now that you have Notation and the Azure Key Vault plugin installed, add the key ID of the certificate created above to Notation.

    1. Get the Certificate Key ID from Azure Key Vault

      az keyvault certificate show \
      --vault-name $keyVaultName \
      --name $keyName \
      --query "kid" --only-show-errors --output tsv

      Replace $keyVaultName and $keyName with the appropriate information.

    2. Add the Key ID to KMS using Notation

      notation key add --plugin azure-kv --id $keyID $keyName
    3. Check the key list

      notation key ls
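    Step 2 above assumes the key ID returned in step 1 is available in a $keyID variable; a minimal sketch of wiring that up in bash:

      # Store the Key Vault certificate's key ID for use with `notation key add`
      keyID=$(az keyvault certificate show \
        --vault-name $keyVaultName \
        --name $keyName \
        --query "kid" --only-show-errors --output tsv)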

    Sign Container Images

    At this point, all that's left is to sign the container images.

    Run the notation sign command to sign the api and web container images:

    notation sign $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword

    notation sign $registry.azurecr.io/api:$tag \
    --username $tokenName \
    --password $tokenPassword

    Replace $registry, $tag, $tokenName, and $tokenPassword with the appropriate values. To improve security, reference the image by its SHA digest instead of a mutable tag.

    NOTE: If you didn't take note of the token password, you can rerun the az acr token create command to generate a new password.
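    If you'd like to confirm the signatures were pushed, recent Notation releases include a listing command; the exact command name and flags may vary by version, so treat this as a hedged example:

    # List signatures associated with the image (command/flag support may differ across Notation versions)
    notation ls $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword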

    Conclusion

    Digital signing plays a critical role in ensuring the security of software supply chains.

    By signing software components, organizations can verify the authenticity and integrity of software, helping to prevent unauthorized modifications, tampering, and malware.

    And if you want to take digital signing to a whole new level by using them to prevent the deployment of unsigned container images, check out the Ratify project on GitHub!

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 7 min read
    Nitya Narasimhan

    Welcome to Week 4 of #CloudNativeNewYear!

    This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. We'll start with an exploration of Serverless Container Options - ranging from managed services like Azure Kubernetes Service (AKS) and Azure Container Apps (ACA), to options that allow more granular control!

    What We'll Cover

    • The Azure Compute Landscape
    • Serverless Compute on Azure
    • Comparing Container Options On Azure
    • Other Considerations
    • Exercise: Try this yourself!
    • Resources: For self-study!


    We started this series with an introduction to core concepts:

    • In Containers 101, we learned why containerization matters. Think portability, isolation, scalability, resource-efficiency and cost-effectiveness. But not all apps can be containerized.
    • In Kubernetes 101, we learned how orchestration works. Think systems to automate container deployment, scaling, and management. But using Kubernetes directly can be complex.
    • In Exploring Cloud Native Options we asked the real questions: can we containerize - and should we? The first depends on app characteristics, the second on your requirements.

    For example:

    • Can we containerize? The answer might be no if your app has system or OS dependencies, requires access to low-level hardware, or maintains complex state across sessions.
    • Should we containerize? The answer might be yes if your app is microservices-based, is stateless by default, requires portability, or is a legacy app that can benefit from container isolation.

    As with every technology adoption decision process, there are no clear yes/no answers - just tradeoffs that you need to evaluate based on your architecture and application requirements. In today's post, we try to look at this from two main perspectives:

    1. Should you go serverless? Think: managed services that let you focus on app, not infra.
    2. What Azure Compute should I use? Think: best fit for my architecture & technology choices.

    Azure Compute Landscape

    Let's answer the second question first by exploring all available compute options on Azure. The illustrated decision-flow below is one of my favorite ways to navigate the choices, with questions like:

    • Are you migrating an existing app or building a new one?
    • Can your app be containerized?
    • Does it use a specific technology (Spring Boot, Red Hat OpenShift)?
    • Do you need access to the Kubernetes API?
    • What characterizes the workload? (event-driven, web app, microservices etc.)

    Read the docs to understand how your choices can be influenced by the hosting model (IaaS, PaaS, FaaS), supported features (Networking, DevOps, Scalability, Availability, Security), architectural styles (Microservices, Event-driven, High-Performance Compute, Task Automation, Web-Queue Worker) etc.

    Compute Choices

    Now that we know all available compute options, let's address the first question: why go serverless, and what are my serverless compute options on Azure?

    Azure Serverless Compute

    Serverless gets defined many ways, but from a compute perspective we can focus on a few characteristics that are key to influencing this decision:

    • managed services - focus on application, let cloud provider handle infrastructure.
    • pay for what you use - get cost-effective resource utilization, flexible pricing options.
    • autoscaling on demand - take advantage of built-in features like KEDA-compliant triggers.
    • use preferred languages - write code in Java, JS, C#, Python etc. (specifics based on service)
    • cloud-native architectures - can support event-driven solutions, APIs, Microservices, DevOps!

    So what are some of the key options for Serverless Compute on Azure? The article dives into Azure's fully-managed, end-to-end serverless solutions, with comprehensive support for DevOps, DevTools, AI/ML, Database, Storage, Monitoring and Analytics integrations. But we'll just focus on the four categories of applications below when we look at compute!

    1. Serverless Containerized Microservices using Azure Container Apps. Code in your preferred language, exploit full Dapr support, scale easily with any KEDA-compliant trigger.
    2. Serverless Application Environments using Azure App Service. Suitable for hosting monolithic apps (vs. microservices) in a managed service, with built-in support for on-demand scaling.
    3. Serverless Kubernetes using Azure Kubernetes Service (AKS). Spin up pods inside container instances and deploy Kubernetes-based applications with built-in KEDA-compliant autoscaling.
    4. Serverless Functions using Azure Functions. Execute "code at the granularity of functions" in your preferred language, and scale on demand with event-driven compute.

    We'll talk about these, and other compute comparisons, at the end of the article. But let's start with the core option you might choose if you want a managed serverless compute solution with built-in features for delivering containerized microservices at scale. Hello, Azure Container Apps!

    Azure Container Apps

    Azure Container Apps (ACA) became generally available in May 2022 - providing customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform. The figure below showcases the different types of applications that can be built with ACA. Note that it comes with built-in KEDA-compliant autoscaling triggers, and other auto-scale criteria that may be better-suited to the type of application you are building.
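    As a quick, hedged sketch of what getting started can look like (the resource names here are made up, and the sample image is Azure's public containerapps-helloworld quickstart image), a containerized app can be deployed to ACA with a couple of CLI calls:

    # Requires the Azure CLI containerapp extension: az extension add --name containerapp
    # Create a Container Apps environment, then deploy a sample container with external ingress
    az containerapp env create \
      --name my-aca-env \
      --resource-group myResourceGroup \
      --location eastus

    az containerapp create \
      --name my-web-app \
      --resource-group myResourceGroup \
      --environment my-aca-env \
      --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
      --target-port 80 \
      --ingress external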

    About ACA

    So far in the series, we've put the spotlight on Azure Kubernetes Service (AKS) - so you're probably asking yourself: How does ACA compare to AKS? We're glad you asked. Check out our Go Cloud-native with Azure Container Apps post from the #ServerlessSeptember series last year for a deeper dive, or review the figure below for the main comparison points.

    The key takeaway is this. Azure Container Apps (ACA) also runs on Kubernetes but abstracts away its complexity in a managed service offering that lets you get productive quickly without requiring detailed knowledge of Kubernetes workings or APIs. However, if you want full access and control over the Kubernetes API then go with Azure Kubernetes Service (AKS) instead.

    Comparison

    Other Container Options

    Azure Container Apps is the preferred Platform as a Service (PaaS) option for a fully-managed serverless solution on Azure that is purpose-built for cloud-native, microservices-based application workloads. But there are other options that may be suitable for your specific needs, from a requirements and tradeoffs perspective. Let's review them quickly:

    1. Azure Functions is the serverless Functions-as-a-Service (FaaS) option, as opposed to ACA which supports a PaaS approach. It's optimized for running event-driven applications built at the granularity of ephemeral functions that can be deployed as code or containers.
    2. Azure App Service provides fully managed hosting for web applications that may be deployed using code or containers. It can be integrated with other services including Azure Container Apps and Azure Functions. It's optimized for deploying traditional web apps.
    3. Azure Kubernetes Service provides a fully managed Kubernetes option capable of running any Kubernetes workload, with direct access to the Kubernetes API.
    4. Azure Container Instances provides a single pod of Hyper-V isolated containers on demand, making them more of a low-level "building block" option compared to ACA.

    Based on the technology choices you made for application development, you may also have more specialized options to consider. For instance:

    1. Azure Spring Apps is ideal if you're running Spring Boot or Spring Cloud workloads on Azure.
    2. Azure Red Hat OpenShift is ideal for integrated Kubernetes-powered OpenShift on Azure.
    3. Azure Confidential Computing is ideal if you have data/code integrity and confidentiality needs.
    4. Kubernetes At The Edge is ideal for bare-metal options that extend compute to edge devices.

    This is just the tip of the iceberg in your decision-making journey - but hopefully it gave you a good sense of the options and criteria that influence your final choices. Let's wrap this up with a look at self-study resources for skilling up further.

    Exercise

    Want to get hands on learning related to these technologies?

    TAKE THE CLOUD SKILLS CHALLENGE

    Register today and level up your skills by completing free learning modules, while competing with your peers for a place on the leaderboards!

    Resources


    · 3 min read
    Cory Skimming

    It's the final week of #CloudNativeNewYear! This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. In today's post, we will introduce you to the basics of the open-source project Draft and how it can be used to easily create and deploy applications to Kubernetes.

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    What We'll Cover

    • What is Draft?
    • Draft basics
    • Demo: Developing to AKS with Draft
    • Resources


    What is Draft?

    Draft is an open-source tool that can be used to streamline the development and deployment of applications on Kubernetes clusters. It provides a simple and easy-to-use workflow for creating and deploying applications, making it easier for developers to focus on writing code and building features, rather than worrying about the underlying infrastructure. This is great for users who are just getting started with Kubernetes, or those who are just looking to simplify their experience.

    New to Kubernetes?

    Draft basics

    Draft streamlines Kubernetes development by taking a non-containerized application and generating the Dockerfiles, K8s manifests, Helm charts, and other artifacts associated with a containerized application. Draft can also create a GitHub Action workflow file to quickly build and deploy your application onto any Kubernetes cluster.

    1. 'draft create': Create a new Draft project by simply running the 'draft create' command - it will walk you through a series of questions about your application (such as the application language) and create a Dockerfile, Helm chart, and Kubernetes manifests.
    2. 'draft generate-workflow': Automatically build out a GitHub Action using the 'draft generate-workflow' command.
    3. 'draft setup-gh': If you are using Azure, use this command to automate the GitHub OIDC setup process to ensure that you will be able to deploy your application using your GitHub Action.

    At this point, you will have all the files needed to deploy your app onto a Kubernetes cluster (we told you it was easy!).
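    Putting it together, a typical end-to-end flow might look like the following - a hedged sketch run from the root of a hypothetical, not-yet-containerized application repository:

    # Run from the root of your application repo
    draft create              # answer the prompts; generates Dockerfile, Helm chart, and manifests
    draft setup-gh            # (Azure) automates the GitHub OIDC setup for deployment
    draft generate-workflow   # generates a GitHub Actions workflow to build and deploy to your cluster
    draft info                # lists supported languages and deployment types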

    You can also use the 'draft info' command if you are looking for information on supported languages and deployment types. Let's see it in action, shall we?


    Developing to AKS with Draft

    In the Microsoft Reactor session below, we'll briefly introduce Kubernetes and the Azure Kubernetes Service (AKS) and then demo how to enable your applications for Kubernetes using the open-source tool Draft. We'll show how Draft can help you create the boilerplate code to containerize your applications and add routing and scaling behaviours.

    Conclusion

    Overall, Draft simplifies the process of building, deploying, and managing applications on Kubernetes, and can make the overall journey from code to Kubernetes significantly easier.


    Resources



    · 7 min read
    Vinicius Apolinario

    Welcome to Day 3 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about using Draft to accelerate your Kubernetes adoption. Today we'll explore the topic of Windows containers.

    What We'll Cover

    • Introduction
    • Windows containers overview
    • Windows base container images
    • Isolation
    • Exercise: Try this yourself!
    • Resources: For self-study!

    Introduction

    Windows containers were launched along with Windows Server 2016 and have evolved since. In the latest release, Windows Server 2022, Windows containers have reached a great level of maturity and allow customers to run production-grade workloads.

    While suitable for new development, Windows containers also give developers and operations a different approach than Linux containers: existing Windows applications can be containerized with little or no code changes, and professionals who are more comfortable with the Windows platform and OS can leverage their skill set while taking advantage of the container platform.

    Windows container overview

    In essence, Windows containers are very similar to Linux containers. Since Windows containers build on the same Docker foundation, you can expect that the same architecture applies - with some Windows-specific notes. For example, when running a Windows container via Docker, you use the same commands, such as docker run. To pull a container image, you can use docker pull, just like on Linux. However, to run a Windows container, you also need a Windows container host. This requirement exists because, as you might remember, a container shares the OS kernel with its container host.

    On Kubernetes, Windows containers have been supported since Windows Server 2019. Just like with Docker, you can manage Windows containers like any other resource in the Kubernetes ecosystem. A Windows node can be part of a Kubernetes cluster, allowing you to run Windows container based applications on services like Azure Kubernetes Service. To deploy a Windows application to a Windows pod in Kubernetes, you author a YAML specification much like you would for Linux. The main differences are that you point to an image that runs on Windows, and you specify a node selector to indicate that the pod needs to run on a Windows node.

    Windows base container images

    For Windows containers, you will always use a base container image provided by Microsoft. This base container image contains the OS binaries for the container to run. The image can be as large as 3GB+ or as small as ~300MB. The difference in size is a consequence of the APIs and components available in each Windows base container image. There are primarily three images: Nano Server, Server Core, and Server.

    Nano Server is the smallest image, at around 300MB. It's a base container image for new development and cloud-native scenarios. Applications need to target Nano Server as the Windows OS, so not all frameworks will work. For example, .NET works on Nano Server, but .NET Framework doesn't. Other third-party frameworks also work on Nano Server, such as Apache, Node.js, Python, Tomcat, the Java runtime, JBoss, and Redis, among others.

    Server Core is a much larger base container image, at around 1.25GB. Its larger size is compensated by its application compatibility: simply put, any application that meets the requirements to run on a Windows container can be containerized with this image.

    The Server image builds on the Server Core one. It ranges around 3.1GB and has even greater application compatibility than the Server Core image. In addition to the traditional Windows APIs and components, this image allows for scenarios such as Machine Learning via DirectX with GPU access.
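    To get a feel for the size differences yourself, you can pull the three base images and compare them locally - a hedged example using the ltsc2022 tags (a Windows container host is required to actually run them):

    docker pull mcr.microsoft.com/windows/nanoserver:ltsc2022
    docker pull mcr.microsoft.com/windows/servercore:ltsc2022
    docker pull mcr.microsoft.com/windows/server:ltsc2022

    # Compare the image sizes
    docker images "mcr.microsoft.com/windows/*"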

    The best image for your scenario depends on the requirements your application has on the Windows OS inside a container. However, there are some scenarios that are not supported at all on Windows containers - such as GUI or RDP-dependent applications and some Windows Server infrastructure roles (such as Active Directory), among others.

    Isolation

    When running containers, the kernel of the container host is shared with the containers running on it. While extremely convenient, this poses a potential risk for multi-tenant scenarios. If one container is compromised and has access to the host, it could potentially compromise other containers in the same system.

    For enterprise customers running on-premises (or even in the cloud), this can be mitigated by using a VM as a container host and considering the VM itself a security boundary. However, if multiple workloads from different tenants need to share the same host, Windows containers offer another option: Hyper-V isolation. While the name Hyper-V is associated with VMs, its virtualization capabilities extend to other services, including containers. Hyper-V isolated containers run on a purpose-built, extremely small, highly performant VM. However, you manage a container running with Hyper-V isolation the same way you do a process-isolated one. In fact, the only notable difference is that you need to append the --isolation=hyperv flag to the docker run command.

    Exercise

    Here are a few examples of how to use Windows containers:

    Run Windows containers via Docker on your machine

    To pull a Windows base container image:

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022

    To run a basic IIS container:

    #This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    Run the same IIS container with Hyper-V isolation

    #This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 --isolation=hyperv mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    To run a Windows container interactively:

    docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 powershell

    Run Windows containers on Kubernetes

    To create an AKS cluster that supports Windows containers (note: replace the values in the example below with the ones from your environment):

    echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
    az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --generate-ssh-keys \
    --windows-admin-username $WINDOWS_USERNAME \
    --vm-set-type VirtualMachineScaleSets \
    --network-plugin azure

    To add a Windows node pool for Windows containers:

    az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwin \
    --node-count 1

    Deploy a sample ASP.Net application to the AKS cluster above using the YAML file below:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      replicas: 1
      template:
        metadata:
          name: sample
          labels:
            app: sample
        spec:
          nodeSelector:
            "kubernetes.io/os": windows
          containers:
          - name: sample
            image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
            resources:
              limits:
                cpu: 1
                memory: 800M
            ports:
            - containerPort: 80
      selector:
        matchLabels:
          app: sample
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sample
    spec:
      type: LoadBalancer
      ports:
      - protocol: TCP
        port: 80
      selector:
        app: sample

    Save the file above and run the command below on your Kubernetes cluster:

    kubectl apply -f <filename>

    Once deployed, you can access the application by getting the IP address of your service:

    kubectl get service
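    The service's external IP can take a minute or two to be provisioned; one way to wait for it (a small hedged tip) is to watch the service until the EXTERNAL-IP column is populated:

    # Watch until EXTERNAL-IP changes from <pending> to a public IP, then press Ctrl+C
    kubectl get service sample --watch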

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    · 5 min read
    Cory Skimming

    Welcome to Week 1 of #CloudNativeNewYear!

    Cloud-native New Year

    You will often hear the term "cloud-native" when discussing modern application development, but even a quick online search will return a huge number of articles, tweets, and web pages with a variety of definitions. So, what does cloud-native actually mean? Also, what makes an application a cloud-native application versus a "regular" application?

    Today, we will address these questions and more as we kickstart our learning journey (and our new year!) with an introductory dive into the wonderful world of cloud-native.


    What We'll Cover

    • What is cloud-native?
    • What is a cloud-native application?
    • The benefits of cloud-native
    • The five pillars of cloud-native
    • Exercise: Take the Cloud Skills Challenge!

    1. What is cloud-native?

    The term "cloud-native" can seem pretty self-evident (yes, hello, native to the cloud?), and in a way, it is. While there are lots of definitions of cloud-native floating around, at its core, cloud-native simply refers to a modern approach to building software that takes advantage of cloud services and environments. This includes using cloud-native technologies, such as containers, microservices, and serverless, and following best practices for deploying, scaling, and managing applications in a cloud environment.

    Official definition from the Cloud Native Computing Foundation:

    Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

    These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil. Source


    2. So, what exactly is a cloud-native application?

    Cloud-native applications are specifically designed to take advantage of the scalability, resiliency, and distributed nature of modern cloud infrastructure. But how does this differ from a "traditional" application?

    Traditional applications have generally been built, tested, and deployed as a single, monolithic unit. The monolithic nature of this type of architecture creates close dependencies between components. This complexity and interweaving only increases as an application grows, and can make it difficult to evolve (not to mention troubleshoot) and challenging to operate over time.

    In contrast, in cloud-native architectures the application components are decomposed into loosely coupled services, rather than built and deployed as one block of code. This decomposition into multiple self-contained services enables teams to manage complexity and improve the speed, agility, and scale of software delivery. Having many small parts enables teams to make targeted updates, deliver new features, and fix any issues without causing broader service disruption.


    3. The benefits of cloud-native

    Cloud-native architectures can bring many benefits to an organization, including:

    1. Scalability: easily scale up or down based on demand, allowing organizations to adjust their resource usage and costs as needed.
    2. Flexibility: deploy and run on any cloud platform, and easily move between clouds and on-premises environments.
    3. High-availability: techniques such as redundancy, self-healing, and automatic failover help ensure that cloud-native applications are designed to be highly-available and fault tolerant.
    4. Reduced costs: take advantage of the pay-as-you-go model of cloud computing, reducing the need for expensive infrastructure investments.
    5. Improved security: tap into cloud security features, such as encryption and identity management, to improve the security of the application.
    6. Increased agility: easily add new features or services to your applications to meet changing business needs and market demand.

    4. The pillars of cloud-native

    There are five areas that are generally cited as the core building blocks of cloud-native architecture:

    1. Microservices: Breaking down monolithic applications into smaller, independent, and loosely-coupled services that can be developed, deployed, and scaled independently.
    2. Containers: Packaging software in lightweight, portable, and self-sufficient containers that can run consistently across different environments.
    3. Automation: Using automation tools and DevOps processes to manage and operate the cloud-native infrastructure and applications, including deployment, scaling, monitoring, and self-healing.
    4. Service discovery: Using service discovery mechanisms, such as APIs & service meshes, to enable services to discover and communicate with each other.
    5. Observability: Collecting and analyzing data from the infrastructure and applications to understand and optimize the performance, behavior, and health of the system.

    These can (and should!) be used in combination to deliver cloud-native solutions that are highly scalable, flexible, and available.

    WHAT'S NEXT

    Stay tuned, as we will be diving deeper into these topics in the coming weeks:

    • Jan 24: Containers 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud-native Options

    Resources


    Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!



    · 4 min read
    Jorge Arteiro

    Welcome to Day 4 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about Windows Containers. Today we'll explore the add-ons and extensions available for Azure Kubernetes Service (AKS).

    What We'll Cover

    • Introduction
    • Add-ons
    • Extensions
    • Add-ons vs Extensions
    • Resources

    Introduction

    Azure Kubernetes Service (AKS) is a fully managed container orchestration service that makes it easy to deploy and manage containerized applications on Azure. AKS offers a number of features and capabilities, including the ability to extend its supported functionality through the use of add-ons and extensions.

    There are also integrations available from open-source projects and third parties, but they are not covered by the AKS support policy.

    Add-ons

    Add-ons provide a supported way to extend AKS. Installation, configuration, and lifecycle are managed by AKS, following pre-determined update rules.

    As an example, let's enable Container Insights with the monitoring add-on on an existing AKS cluster, using the az aks enable-addons --addons CLI command:

    az aks enable-addons \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --addons monitoring

    Or you can use az aks create --enable-addons when creating new clusters:

    az aks create \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --enable-addons monitoring

    The current available add-ons are:

    1. http_application_routing - Configure ingress with automatic public DNS name creation. Only recommended for development.
    2. monitoring - Container Insights monitoring.
    3. virtual-node - CNCF virtual nodes open source project.
    4. azure-policy - Azure Policy for AKS.
    5. ingress-appgw - Application Gateway Ingress Controller (AGIC).
    6. open-service-mesh - CNCF Open Service Mesh project.
    7. azure-keyvault-secrets-provider - Azure Key Vault Secrets Provider for Secret Store CSI Driver.
    8. web_application_routing - Managed NGINX ingress Controller.
    9. keda - CNCF Event-driven autoscaling project.

    For more details, get the updated list of AKS Add-ons here
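    To check which add-ons are currently enabled on an existing cluster, one option (a hedged sketch reusing the hypothetical names above) is to inspect the cluster's addonProfiles:

    # Show the add-on profiles enabled on an existing cluster
    az aks show \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --query addonProfiles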

    Extensions

    Cluster extensions use Helm charts and integrate with Azure Resource Manager (ARM) to provide installation and lifecycle management of capabilities on top of AKS.

    Extensions can be auto-upgraded across minor versions, but this requires extra management and configuration. Using the scope parameter, an extension can be installed on the whole cluster or per namespace.

    AKS extensions require the k8s-extension Azure CLI extension to be installed. To add or update this CLI extension, use the following commands:

    az extension add --name k8s-extension

    and to update an existing extension

    az extension update --name k8s-extension

    There are only 3 available extensions:

    1. Dapr - CNCF Dapr project.
    2. Azure ML - Integrate Azure Machine Learning with AKS to train, inference and manage ML models.
    3. Flux (GitOps) - CNCF Flux project integrated with AKS to enable cluster configuration and application deployment using GitOps.

    As an example, you can install Azure ML using the following command:

    az k8s-extension create \
    --name aml-compute --extension-type Microsoft.AzureML.Kubernetes \
    --scope cluster --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters \
    --configuration-settings enableInference=True allowInsecureConnections=True
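    Once created, you can verify which extensions are installed on the cluster - a hedged example using the same placeholder names as above:

    az k8s-extension list \
    --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters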

    For more details, get the updated list of AKS Extensions here

    Add-ons vs Extensions

    AKS add-ons bring the advantage of being fully managed by AKS itself, while AKS extensions are more flexible and configurable but require an extra level of management.

    Add-ons are part of the AKS resource provider in the Azure API, and AKS Extensions are a separate resource provider on the Azure API.

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    · 6 min read
    Cory Skimming
    Steven Murawski
    Paul Yu
    Josh Duffney
    Nitya Narasimhan
    Vinicius Apolinario
    Jorge Arteiro
    Devanshi Joshi

    And that's a wrap on the inaugural #CloudNativeNewYear! Thank you for joining us to kick off the new year with this learning journey into cloud-native! In this final post of the 2023 celebration of all things cloud-native, we'll do two things:

    • Look Back - with a quick retrospective of what was covered.
    • Look Ahead - with resources and suggestions for how you can continue your skilling journey!

    We appreciate your time and attention and we hope you found this curated learning valuable. Feedback and suggestions are always welcome. From our entire team, we wish you good luck with the learning journey - now go build some apps and share your knowledge! 🎉


    What We'll Cover

    • Cloud-native fundamentals
    • Kubernetes fundamentals
    • Bringing your applications to Kubernetes
    • Go further with cloud-native
    • Resources to keep the celebration going!

    Week 1: Cloud-native Fundamentals

    In Week 1, we took a tour through the fundamentals of cloud-native technologies, including a walkthrough of the core concepts of containers, microservices, and Kubernetes.

    • Jan 23 - Cloud-native Fundamentals: The answers to life and all the universe - what is cloud-native? What makes an application cloud-native? What are the benefits? (yes, we all know it's 42, but hey, gotta start somewhere!)
    • Jan 24 - Containers 101: Containers are an essential component of cloud-native development. In this intro post, we cover how containers work and why they have become so popular.
    • Jan 25 - Kubernetes 101: Kuber-what-now? Learn the basics of Kubernetes and how it enables us to deploy and manage our applications effectively and consistently.
    A QUICKSTART GUIDE TO KUBERNETES CONCEPTS

    Missed it Live? Tune in to A Quickstart Guide to Kubernetes Concepts on demand, now!

    • Jan 26 - Microservices 101: What is a microservices architecture and how can we go about designing one?
    • Jan 27 - Exploring your Cloud Native Options: Cloud-native, while catchy, can be a very broad term. What technologies should you use? Learn some basic guidelines for when it is optimal to use different technologies for your project.

    Week 2: Kubernetes Fundamentals

    In Week 2, we took a deeper dive into the Fundamentals of Kubernetes. The posts and live demo from this week took us through how to build a simple application on Kubernetes, covering everything from deployment to networking and scaling. Note: for our samples and demo we have used Azure Kubernetes Service, but the principles apply to any Kubernetes!

    • Jan 30 - Pods and Deployments: how to use pods and deployments in Kubernetes.
    • Jan 31 - Services and Ingress: how to use services and ingress and a walk through the steps of making our containers accessible internally and externally!
    • Feb 1 - ConfigMaps and Secrets: how to pass configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.
    • Feb 2 - Volumes, Mounts, and Claims: how to use persistent storage on Kubernetes (and ensure your data can survive container restarts!).
    • Feb 3 - Scaling Pods and Nodes: how to scale pods and nodes in our Kubernetes cluster.
    ASK THE EXPERTS: AZURE KUBERNETES SERVICE

    Missed it Live? Tune in to Ask the Expert with Azure Kubernetes Service on demand, now!


    Week 3: Bringing your applications to Kubernetes

    So, you have learned how to build an application on Kubernetes. What about your existing applications? In Week 3, we explored how to take an existing application and set it up to run in Kubernetes:

    • Feb 6 - CI/CD: learn how to get an existing application running in Kubernetes with a full pipeline in GitHub Actions.
    • Feb 7 - Adapting Storage, Secrets, and Configuration: how to evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes.
    • Feb 8 - Opening your Application with Ingress: how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.
    • Feb 9 - Debugging and Instrumentation: how to debug and instrument your application now that it is on Kubernetes.
    • Feb 10 - CI/CD Secure Supply Chain: now that we have set up our application on Kubernetes, let's talk about container image signing and how to set up a secure supply chain.

    Week 4: Go Further with Cloud-Native

    This week we have gone further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner.

    And today, February 17th, with this one post to rule (er, collect) them all!


    Keep the Learning Going!

    Learning is great, so why stop here? We have a host of great resources and samples for you to continue your cloud-native journey with Azure below:



    · 4 min read
    Steven Murawski
    Paul Yu
    Josh Duffney

    Welcome to Day 2 of Week 1 of #CloudNativeNewYear!

    Today, we'll focus on building an understanding of containers.

    What We'll Cover

    • Introduction
    • How do Containers Work?
    • Why are Containers Becoming so Popular?
    • Conclusion
    • Resources
    • Learning Path

    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    In the beginning, we deployed our applications onto physical servers. We only had a certain number of those servers, so often they hosted multiple applications. This led to some problems when those applications shared dependencies. Upgrading one application could break another application on the same server.

    Enter virtualization. Virtualization allowed us to run our applications in an isolated operating system instance. This removed much of the risk of updating shared dependencies. However, it increased our overhead since we had to run a full operating system for each application environment.

    To address the challenges created by virtualization, containerization was created to improve isolation without duplicating kernel level resources. Containers provide efficient and consistent deployment and runtime experiences for our applications and have become very popular as a way of packaging and distributing applications.

    How do Containers Work?

    Containers build on two capabilities in the Linux operating system: namespaces and cgroups. These constructs allow the operating system to provide isolation to a process or group of processes, keeping their access to filesystem resources separate and providing controls on resource utilization. This, combined with tooling to help package, deploy, and run container images, has led to their popularity in today’s operating environment. It gives us isolation without the overhead of additional operating system resources.

    When a container host is deployed on an operating system, it schedules access to the OS (operating system) components. This is done by providing a logical isolated group that can contain processes for a given application, called a namespace. The container host then manages/schedules access from the namespace to the host OS. The container host also uses cgroups to allocate compute resources. Together, the container host, with the help of cgroups and namespaces, can schedule multiple applications to access host OS resources.

    Overall, this gives the illusion of virtualizing the host OS, where each application gets its own OS. In actuality, all the applications are running on the same operating system and sharing the same kernel as the container host.
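    For a concrete illustration of cgroups at work - a hedged example using one of Azure's public sample images - you can cap a container's resources at run time and observe the limits being enforced:

    # Run a sample container with cgroup-enforced CPU and memory limits
    docker run -d --name limits-demo --cpus="1" --memory="256m" -p 8080:80 mcr.microsoft.com/azuredocs/aci-helloworld
    # Observe the limits and current usage, then clean up
    docker stats limits-demo --no-stream
    docker rm -f limits-demo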

    Containers are popular in the software development industry because they provide several benefits over traditional virtualization methods. Some of these benefits include:

    • Portability: Containers make it easy to move an application from one environment to another without having to worry about compatibility issues or missing dependencies.
    • Isolation: Containers provide a level of isolation between the application and the host system, which means that the application running in the container cannot access the host system's resources.
    • Scalability: Containers make it easy to scale an application up or down as needed, which is useful for applications that experience a lot of traffic or need to handle a lot of data.
    • Resource Efficiency: Containers are more resource-efficient than traditional virtualization methods because they don't require a full operating system to be running on each virtual machine.
    • Cost-Effective: Containers are more cost-effective than traditional virtualization methods because they don't require expensive hardware or licensing fees.

    Conclusion

    Containers are a powerful technology that allows developers to package and deploy applications in a portable and isolated environment. This technology is becoming increasingly popular in the world of software development and is being used by many companies and organizations to improve their application deployment and management processes. With the benefits of portability, isolation, scalability, resource efficiency, and cost-effectiveness, containers are definitely worth considering for your next application development project.

    Containerizing applications is a key step in modernizing them, and there are many other patterns that can be adopted to achieve cloud-native architectures, including using serverless platforms, Kubernetes, and implementing DevOps practices.

    Resources

    Learning Path


    · 3 min read
    Steven Murawski

    Welcome to Day 3 of Week 1 of #CloudNativeNewYear!

    This week we'll focus on what Kubernetes is.

    What We'll Cover

    • Introduction
    • What is Kubernetes? (Video)
    • How does Kubernetes Work? (Video)
    • Conclusion


    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    Kubernetes is an open source container orchestration engine that can help with automated deployment, scaling, and management of our applications.

    Kubernetes takes physical (or virtual) resources and provides a consistent API over them, bringing a consistency to the management and runtime experience for our applications. Kubernetes provides us with a number of capabilities such as:

    • Container scheduling
    • Service discovery and load balancing
    • Storage orchestration
    • Automated rollouts and rollbacks
    • Automatic bin packing
    • Self-healing
    • Secret and configuration management

    We'll learn more about most of these topics as we progress through Cloud Native New Year.
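    To make those capabilities concrete, here's a hedged sketch of the kind of declarative workflow Kubernetes gives us with kubectl (the sample image is one of Azure's public samples and is assumed to listen on port 80):

    # Deploy, expose, scale, and roll out an application - Kubernetes handles scheduling and healing
    kubectl create deployment hello --image=mcr.microsoft.com/azuredocs/aks-helloworld:v1
    kubectl expose deployment hello --type=LoadBalancer --port=80
    kubectl scale deployment hello --replicas=3
    kubectl rollout status deployment/hello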

    What is Kubernetes?

    Let's hear from Brendan Burns, one of the founders of Kubernetes as to what Kubernetes actually is.

    How does Kubernetes Work?

    And Brendan shares a bit more with us about how Kubernetes works.

    Conclusion

    Kubernetes allows us to deploy and manage our applications effectively and consistently.

    By providing a consistent API across many of the concerns our applications have, like load balancing, networking, storage, and compute, Kubernetes improves both our ability to build and ship new software.

    Applications can depend on standard interfaces for the resources they need, and deployments, metrics, and logs are provided in a standardized fashion, allowing more efficient operations across our application environments.

    And since Kubernetes is an open source platform, it can be found in just about every type of operating environment - cloud, virtual machines, physical hardware, shared data centers, even small devices like Raspberry Pis!

    Want to learn more? Join us for a webinar on Kubernetes Concepts (or catch the playback) on Thursday, January 26th at 1 PM PST and watch for the rest of this series right here!


    · 6 min read
    Josh Duffney

    Welcome to Day 4 of Week 1 of #CloudNativeNewYear!

    This week we're focusing on the fundamentals of cloud-native practices and technologies. In today's post, we'll dig into microservices - what they are, how to design them, and the challenges they introduce as you adopt them with platforms like Kubernetes.

    What We'll Cover

    • What is Microservice Architecture?
    • How do you design a Microservice?
    • What challenges do Microservices introduce?
    • Conclusion
    • Resources


    Microservices are a modern way of designing and building software that increases deployment velocity by decomposing an application into small autonomous services that can be deployed independently.

    By deploying loosely coupled microservices your applications can be developed, deployed, and scaled independently. Because each service is independent, it can be updated or replaced without having to worry about the impact on the rest of the application. This means that if a bug is found in one service, it can be fixed without having to redeploy the entire application. All of which gives an organization the ability to deliver value to their customers faster.

    In this article, we will explore the basics of microservices architecture, its benefits and challenges, and how it can help improve the development, deployment, and maintenance of software applications.

    What is Microservice Architecture?

    Before explaining what Microservice architecture is, it’s important to understand what problems microservices aim to address.

    Traditional software development is centered around building monolithic applications. Monolithic applications are built as a single, large codebase, meaning the code is tightly coupled, which causes the monolithic app to suffer from the following:

    Too much Complexity: Monolithic applications can become complex and difficult to understand and maintain as they grow. This can make it hard to identify and fix bugs and add new features.

    Difficult to Scale: Monolithic applications can be difficult to scale as they often have a single point of failure, which can cause the whole application to crash if a service fails.

    Slow Deployment: Deploying a monolithic application can be risky and time-consuming, as a small change in one part of the codebase can affect the entire application.

    Microservice architecture (often called microservices) is an architecture style that addresses the challenges created by Monolithic applications. Microservices architecture is a way of designing and building software applications as a collection of small, independent services that communicate with each other through APIs. This allows for faster development and deployment cycles, as well as easier scaling and maintenance than is possible with a monolithic application.

    How do you design a Microservice?

    Building applications with Microservices architecture requires a different approach. Microservices architecture focuses on business capabilities rather than technical layers, such as data access or messaging. Doing so requires that you shift your focus away from the technical stack and model your applications based upon the various domains that exist within the business.

    Domain-driven design (DDD) is a way to design software by focusing on the business needs. You can use Domain-driven design as a framework that guides the development of well-designed microservices by building services that encapsulate knowledge in each domain and abstract that knowledge from clients.

    In Domain-driven design you start by modeling the business domain and creating a domain model. A domain model is an abstract model of the business that distills and organizes a domain of knowledge and provides a common language for developers and domain experts. It’s this resulting domain model that microservices are best suited to be built around, because it helps establish well-defined boundaries between external systems and other internal applications.

    In short, before you begin designing microservices, start by mapping the functions of the business and their connections to create a domain model for the microservice(s) to be built around.

    What challenges do Microservices introduce?

    Microservices solve a lot of problems and have several advantages, but the grass isn’t always greener on the other side.

    One of the key challenges of microservices is managing communication between services. Because services are independent, they need to communicate with each other through APIs. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear API design, with well-defined inputs and outputs for each service. It is also important to have a system for managing and monitoring communication between services, to ensure that everything is running smoothly.

    Another challenge of microservices is managing the deployment and scaling of services. Because each service is independent, it needs to be deployed and scaled separately from the rest of the application. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear and consistent deployment process, with well-defined steps for deploying and scaling each service. Furthermore, it is advisable to host them on a system with self-healing capabilities to reduce operational burden.

    It is also important to have a system for monitoring and managing the deployment and scaling of services, to ensure optimal performance.

    Each of these challenges has created fertile ground for the tooling and processes that exist in the cloud-native ecosystem. Kubernetes, CI/CD, and other DevOps practices are part of the package of adopting the microservices architecture.

    Conclusion

    In summary, microservices architecture focuses on software applications as a collection of small, independent services that communicate with each other over well-defined APIs.

    The main advantages of microservices include:

    • increased flexibility and scalability per microservice,
    • efficient resource utilization (with help from a container orchestrator like Kubernetes),
    • and faster development cycles.

    Continue following along with this series to see how you can use Kubernetes to help adopt microservices patterns in your own environments!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/tags/azure-kubernetes-service/page/6/index.html b/cnny-2023/tags/azure-kubernetes-service/page/6/index.html index 297beb62a6..0070c36acd 100644 --- a/cnny-2023/tags/azure-kubernetes-service/page/6/index.html +++ b/cnny-2023/tags/azure-kubernetes-service/page/6/index.html @@ -14,14 +14,14 @@ - +

    21 posts tagged with "azure-kubernetes-service"

    View All Tags

    · 6 min read
    Cory Skimming

    We are excited to be wrapping up our first week of #CloudNativeNewYear! This week, we have tried to set the stage by covering the fundamentals of cloud-native practices and technologies, including primers on containerization, microservices, and Kubernetes.

    Don't forget to sign up for the Cloud Skills Challenge!

    Today, we will do a brief recap of some of these technologies and provide some basic guidelines for when it is optimal to use each.


    What We'll Cover

    • To Containerize or not to Containerize?
    • The power of Kubernetes
    • Where does Serverless fit?
    • Resources
    • What's coming next!


    Just joining us now? Check out these other Week 1 posts:

    To Containerize or not to Containerize?

    As mentioned in our Containers 101 post earlier this week, containers can provide several benefits over traditional virtualization methods, which has made them popular within the software development community. Containers provide a consistent and predictable runtime environment, which can help reduce the risk of compatibility issues and simplify the deployment process. Additionally, containers can improve resource efficiency by allowing multiple applications to run on the same host while isolating their dependencies.

    Some types of apps that are a particularly good fit for containerization include:

    1. Microservices: Containers are particularly well-suited for microservices-based applications, as they can be used to isolate and deploy individual components of the system. This allows for more flexibility and scalability in the deployment process.
    2. Stateless applications: Applications that do not maintain state across multiple sessions, such as web applications, are well-suited for containers. Containers can be easily scaled up or down as needed and replaced with new instances, without losing data.
    3. Portable applications: Applications that need to be deployed in different environments, such as on-premises, in the cloud, or on edge devices, can benefit from containerization. The consistent and portable runtime environment of containers can make it easier to move the application between different environments.
    4. Legacy applications: Applications that are built using older technologies or that have compatibility issues can be containerized to run in an isolated environment, without impacting other applications or the host system.
    5. Dev and testing environments: Containerization can be used to create isolated development and testing environments, which can be easily created and destroyed as needed.

    While there are many types of applications that can benefit from a containerized approach, it's worth noting that containerization is not always the best option, and it's important to weigh the benefits and trade-offs before deciding to containerize an application. Additionally, some types of applications may not be a good fit for containers including:

    • Apps that require full access to host resources: Containers are isolated from the host system, so if an application needs direct access to hardware resources such as GPUs or specialized devices, it might not work well in a containerized environment.
    • Apps that require low-level system access: If an application requires deep access to the underlying operating system, it may not be suitable for running in a container.
    • Applications that have specific OS dependencies: Apps that have specific dependencies on a certain version of an operating system or libraries may not be able to run in a container.
    • Stateful applications: Apps that maintain state across multiple sessions, such as databases, may not be well suited for containers. Containers are ephemeral by design, so the data stored inside a container may not persist between restarts.

    The good news is that some of these limitations can be overcome with the use of container orchestration platforms such as Kubernetes, and by carefully designing the architecture of the application.


    The power of Kubernetes

    Speaking of Kubernetes...

    Kubernetes is a powerful tool for managing and deploying containerized applications in production environments, particularly for applications that need to scale, handle large numbers of requests, or run in multi-cloud or hybrid environments.

    Kubernetes is well-suited for a wide variety of applications, but it is particularly well-suited for the following types of applications:

    1. Microservices-based applications: Kubernetes provides a powerful set of tools for managing and deploying microservices-based applications, making it easy to scale, update, and manage the individual components of the application.
    2. Stateful applications: Kubernetes provides support for stateful applications through the use of Persistent Volumes and StatefulSets, allowing for applications that need to maintain state across multiple instances.
    3. Large-scale, highly-available systems: Kubernetes provides built-in support for scaling, self-healing, and rolling updates, making it an ideal choice for large-scale, highly-available systems that need to handle large numbers of users and requests.
    4. Multi-cloud and hybrid environments: Kubernetes can be used to deploy and manage applications across multiple cloud providers and on-premises environments, making it a good choice for organizations that want to take advantage of the benefits of multiple cloud providers or that need to deploy applications in a hybrid environment.
    New to Kubernetes?

    Where does Serverless fit in?

    Serverless is a cloud computing model where the cloud provider (like Azure) is responsible for executing a piece of code by dynamically allocating the resources. With serverless, you only pay for the exact amount of compute time that you use, rather than paying for a fixed amount of resources. This can lead to significant cost savings, particularly for applications with variable or unpredictable workloads.

    Serverless is commonly used for building applications like web or mobile apps, IoT, data processing, and real-time streaming - apps where the workloads are variable and high scalability is required. It's important to note that serverless is not a replacement for all types of workloads - it's best suited for stateless, short-lived and small-scale workloads.
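    If you want to try serverless hands-on, one low-friction path is the Azure Functions Core Tools. The commands below are only a minimal sketch (the project and function names are placeholders, and the tools must already be installed):

    # Create a Node.js function project, add an HTTP-triggered function, and run it locally
    func init my-functions-app --worker-runtime node
    cd my-functions-app
    func new --name HttpExample --template "HTTP trigger"
    func start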

    For a detailed look into the world of Serverless and lots of great learning content, revisit #30DaysofServerless.


    Resources


    What's up next in #CloudNativeNewYear?

    Week 1 has been all about the fundamentals of cloud-native. Next week, the team will be diving into application deployment with Azure Kubernetes Service. Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!


    - + \ No newline at end of file diff --git a/cnny-2023/tags/azure-kubernetes-service/page/7/index.html b/cnny-2023/tags/azure-kubernetes-service/page/7/index.html index bfb9703be3..5b7611fd0d 100644 --- a/cnny-2023/tags/azure-kubernetes-service/page/7/index.html +++ b/cnny-2023/tags/azure-kubernetes-service/page/7/index.html @@ -14,13 +14,13 @@ - +

    21 posts tagged with "azure-kubernetes-service"

    View All Tags

    · 14 min read
    Steven Murawski

    Welcome to Day #1 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Last week we talked about Cloud Native architectures and the Cloud Native landscape. Today we'll explore the topic of Pods and Deployments in Kubernetes.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Setting Up A Kubernetes Environment in Azure
    • Running Containers in Kubernetes Pods
    • Making the Pods Resilient with Deployments
    • Exercise
    • Resources

    Setting Up A Kubernetes Environment in Azure

    For this week, we'll be working with a simple app - the Azure Voting App. My teammate Paul Yu ported the app to Rust and we tweaked it a bit to let us highlight some of the basic features of Kubernetes.

    You should be able to replicate this in just about any Kubernetes environment, but we'll use Azure Kubernetes Service (AKS) as our working environment for this week.

    To make it easier to get started, there's a Bicep template to deploy an AKS cluster, an Azure Container Registry (ACR) (to host our container image), and connect the two so that we can easily deploy our application.

    Step 0 - Prerequisites

    There are a few things you'll need if you want to work through this and the following examples this week.

    Required:

    • Git (and probably a GitHub account if you want to persist your work outside of your computer)
    • Azure CLI
    • An Azure subscription (if you want to follow along with the Azure steps)
    • Kubectl (the command line tool for managing Kubernetes)

    Helpful:

    • Visual Studio Code (or equivalent editor)

    Step 1 - Clone the application repository

    First, I forked the source repository to my account.

    $GitHubOrg = 'smurawski' # Replace this with your GitHub account name or org name
    git clone "https://github.com/$GitHubOrg/azure-voting-app-rust"
    cd azure-voting-app-rust

    Leave your shell opened with your current location inside the application repository.

    Step 2 - Set up AKS

    Running the template deployment from the demo script (I'm using the PowerShell example in cnny23-week2-day1.ps1, but there's a Bash variant at cnny23-week2-day1.sh) stands up the environment. The second, third, and fourth commands take some of the output from the Bicep deployment to set up for later commands, so don't close out your shell after you run these commands.

    az deployment sub create --template-file ./deploy/main.bicep --location eastus --parameters 'resourceGroup=cnny-week2'
    $AcrName = az deployment sub show --name main --query 'properties.outputs.acr_name.value' -o tsv
    $AksName = az deployment sub show --name main --query 'properties.outputs.aks_name.value' -o tsv
    $ResourceGroup = az deployment sub show --name main --query 'properties.outputs.resource_group_name.value' -o tsv

    az aks get-credentials --resource-group $ResourceGroup --name $AksName
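    As an optional sanity check (not part of the original script), you can confirm that kubectl is now pointed at the new cluster by listing its nodes:

    kubectl get nodes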

    Step 3 - Build our application container

    Since we have an Azure Container Registry set up, I'll use ACR Build Tasks to build and store my container image.

    az acr build --registry $AcrName --% --image cnny2023/azure-voting-app-rust:{{.Run.ID}} .
    $BuildTag = az acr repository show-tags `
    --name $AcrName `
    --repository cnny2023/azure-voting-app-rust `
    --orderby time_desc `
    --query '[0]' -o tsv
    tip

    Wondering what the --% is in the first command line? That tells the PowerShell interpreter to pass the input after it "as is" to the command without parsing/evaluating it. Otherwise, PowerShell messes a bit with the templated {{.Run.ID}} bit.

    Running Containers in Kubernetes Pods

    Now that we have our AKS cluster and application image ready to go, let's look into how Kubernetes runs containers.

    If you've been in tech for any length of time, you've seen that every framework, runtime, orchestrator, etc. can have its own naming scheme for its concepts. So let's get into some of what Kubernetes calls things.

    The Pod

    A container running in Kubernetes is called a Pod. A Pod is basically a running container on a Node or VM. It can be more than that - for example, you can run multiple containers in a single Pod and specify some fairly involved configuration - but we'll keep it simple for now and add the complexity when you need it.

    Our Pod definition can be created via the kubectl command imperatively from arguments or declaratively from a configuration file. We'll do a little of both. We'll use the kubectl command to help us write our configuration files. Kubernetes configuration files are YAML, so having an editor that supports and can help you syntax check YAML is really helpful.

    Creating a Pod Definition

    Let's create a few Pod definitions. Our application requires two containers to get working - the application and a database.

    Let's create the database Pod first. And before you comment, the configuration isn't secure nor best practice. We'll fix that later this week. For now, let's focus on getting up and running.

    This is a trick I learned from one of my teammates - Paul. By using the --output yaml and --dry-run=client options, we can have the command help us write our YAML. And with a bit of output redirection, we can stash it safely in a file for later use.

    kubectl run azure-voting-db `
    --image "postgres:15.0-alpine" `
    --env "POSTGRES_PASSWORD=mypassword" `
    --output yaml `
    --dry-run=client > manifests/pod-db.yaml

    This creates a file that looks like:

    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: azure-voting-db
      name: azure-voting-db
    spec:
      containers:
      - env:
        - name: POSTGRES_PASSWORD
          value: mypassword
        image: postgres:15.0-alpine
        name: azure-voting-db
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}

    The file, when supplied to the Kubernetes API, will identify what kind of resource to create, the API version to use, and the details of the container (as well as an environment variable to be supplied).

    We'll get that container image started with the kubectl command. Because the details of what to create are in the file, we don't need to specify much else to the kubectl command but the path to the file.

    kubectl apply -f ./manifests/pod-db.yaml

    I'm going to need the IP address of the Pod so that my application can connect to it, and we can use kubectl to get that information about our pod. By default, kubectl get pod only displays certain information, but it retrieves a lot more. We can use JSONPath syntax to index into the response and get just the information we want.

    tip

    To see what you can get, I usually run the kubectl command with JSON output (-o json), find where the data I want lives, and then create my JSONPath query to get it.

    $DB_IP = kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    Now, let's create our Pod definition for our application. We'll use the same technique as before.

    kubectl run azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --env "DATABASE_SERVER=$DB_IP" `
    --env "DATABASE_PASSWORD=mypassword`
    --output yaml `
    --dry-run=client > manifests/pod-app.yaml

    That command gets us a similar YAML file to the database container - you can see the full file here

    Let's get our application container running.

    kubectl apply -f ./manifests/pod-app.yaml

    Now that the Application is Running

    We can check the status of our Pods with:

    kubectl get pods

    And we should see something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app 1/1 Running 0 36s
    azure-voting-db 1/1 Running 0 84s

    Once our pod is running, we can check to make sure everything is working by letting kubectl proxy network connections to our Pod running the application. If we get the voting web page, we'll know the application found the database and we can start voting!

    kubectl port-forward pod/azure-voting-app 8080:8080

    Azure voting website in a browser with three buttons, one for Dogs, one for Cats, and one for Reset.  The counter is Dogs - 0 and Cats - 0.

    When you are done voting, you can stop the port forwarding by using Control-C to break the command.

    Clean Up

    Let's clean up after ourselves and see if we can't get Kubernetes to help us keep our application running. We can use the same configuration files to ensure that Kubernetes only removes what we want removed.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    Summary - Pods

    A Pod is the most basic unit of work inside Kubernetes. Once the Pod is deleted, it's gone. That leads us to our next topic (and final topic for today.)

    Making the Pods Resilient with Deployments

    We've seen how easy it is to deploy a Pod and get our containers running on Nodes in our Kubernetes cluster. But there's a problem with that. Let's illustrate it.

    Breaking Stuff

    Setting Back Up

    First, let's redeploy our application environment. We'll start with our application container.

    kubectl apply -f ./manifests/pod-db.yaml
    kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    The second command will report out the new IP Address for our database container. Let's open ./manifests/pod-app.yaml and update the container IP to our new one.

    - name: DATABASE_SERVER
      value: YOUR_NEW_IP_HERE

    Then we can deploy the application with the information it needs to find its database. We'll also list out our pods to see what is running.

    kubectl apply -f ./manifests/pod-app.yaml
    kubectl get pods

    Feel free to look back and use the port forwarding trick to make sure your app is running if you'd like.

    Knocking It Down

    The first thing we'll try to break is our application pod. Let's delete it.

    kubectl delete pod azure-voting-app

    Then, we'll check our pod's status:

    kubectl get pods

    Which should show something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db 1/1 Running 0 50s

    We should be able to recreate our application pod deployment with no problem, since it has the current database IP address and nothing else depends on it.

    kubectl apply -f ./manifests/pod-app.yaml

    Again, feel free to do some fun port forwarding and check your site is running.

    Uncomfortable Truths

    Here's where it gets a bit stickier: what if we delete the database container?

    If we delete our database container and recreate it, it'll likely have a new IP address, which would force us to update our application configuration. We'll look at some solutions for these problems in the next three posts this week.

    Because our database problem is a bit tricky, we'll primarily focus on making our application layer more resilient and prepare our database layer for those other techniques over the next few days.

    Let's clean back up and look into making things more resilient.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    The Deployment

    One of the reasons you may want to use Kubernetes is its ability to orchestrate workloads. Part of that orchestration includes being able to ensure that certain workloads are running (regardless of what Node they might be on).

    We saw that we could delete our application pod and then restart it from the manifest with little problem. It just meant that we had to run a command to restart it. We can use the Deployment in Kubernetes to tell the orchestrator to ensure we have our application pod running.

    The Deployment also can encompass a lot of extra configuration - controlling how many containers of a particular type should be running, how upgrades of container images should proceed, and more.
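    For a sense of what that extra configuration looks like, here is a hedged sketch of fields a Deployment spec can carry (the values are illustrative only; the walkthrough below generates the actual manifests for you):

    spec:
      replicas: 3                # keep three copies of the pod running at all times
      strategy:
        type: RollingUpdate      # replace pods gradually when the image changes
        rollingUpdate:
          maxUnavailable: 1      # at most one pod down during an update
          maxSurge: 1            # at most one extra pod created during an update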

    Creating the Deployment

    First, we'll create a Deployment for our database. We'll use a technique similar to what we did for the Pod, with just a bit of difference.

    kubectl create deployment azure-voting-db `
    --image "postgres:15.0-alpine" `
    --port 5432 `
    --output yaml `
    --dry-run=client > manifests/deployment-db.yaml

    Unlike our Pod definition creation, we can't pass in environment variable configuration from the command line. We'll have to edit the YAML file to add that.

    So, let's open ./manifests/deployment-db.yaml in our editor and add the following in the spec/containers configuration.

            env:
            - name: POSTGRES_PASSWORD
              value: "mypassword"

    Your file should look like this deployment-db.yaml.

    Once we have our configuration file updated, we can deploy our database container image.

    kubectl apply -f ./manifests/deployment-db.yaml

    For our application, we'll use the same technique.

    kubectl create deployment azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --port 8080 `
    --output yaml `
    --dry-run=client > manifests/deployment-app.yaml

    Next, we'll need to add an environment variable to the generated configuration. We'll also need the new IP address for the database deployment.

    Previously, we named the pod and were able to ask for the IP address with kubectl and a bit of JSONPath. Now, the deployment created the pod for us, so there's a bit of randomness in the naming. Check out:

    kubectl get pods

    Should return something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 7s

    We can either ask for the IP with the new pod name, or we can use a selector to find our desired pod.

    kubectl get pod --selector app=azure-voting-db -o jsonpath='{.items[0].status.podIP}'

    Now, we can update our application deployment configuration file with:

            env:
            - name: DATABASE_SERVER
              value: YOUR_NEW_IP_HERE
            - name: DATABASE_PASSWORD
              value: mypassword

    Your file should look like this deployment-app.yaml (but with IPs and image names matching your environment).

    After we save those changes, we can deploy our application.

    kubectl apply -f ./manifests/deployment-app.yaml

    Let's test the resilience of our app now. First, we'll delete the pod running our application, then we'll check to make sure Kubernetes restarted our application pod.

    kubectl get pods
    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app-56c9ccc89d-skv7x 1/1 Running 0 71s
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 12m
    kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
    kubectl get pods
    azure-voting-app-rust ❯  kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
    >> kubectl get pods
    pod "azure-voting-app-56c9ccc89d-skv7x" deleted
    NAME READY STATUS RESTARTS AGE
    azure-voting-app-56c9ccc89d-2b5mx 1/1 Running 0 2s
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 14m
    info

    Your Pods will likely have different identifiers at the end, so adjust your commands to match the names in your environment.

    As you can see, by the time the kubectl get pods command was run, Kubernetes had already spun up a new pod for the application container image. Thanks Kubernetes!
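    If you'd like to push the experiment a little further (an optional aside, not part of the original walkthrough), you can ask the Deployment to keep several copies of the application running and watch Kubernetes spread the work:

    kubectl scale deployment azure-voting-app --replicas=3
    kubectl get pods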

    Clean up

    Since deleting a pod just prompts the Deployment to replace it, we have to delete the Deployments themselves to clean up.

    kubectl delete -f ./manifests/deployment-app.yaml
    kubectl delete -f ./manifests/deployment-db.yaml

    Summary - Deployments

    Deployments allow us to create more durable configuration for the workloads we deploy into Kubernetes. As we dig deeper, we'll discover more capabilities the deployments offer. Check out the Resources below for more.

    Exercise

    If you want to try these steps, head over to the source repository, fork it, clone it locally, and give it a spin!

    You can check your manifests against the manifests in the week2/day1 branch of the source repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training

    - + \ No newline at end of file diff --git a/cnny-2023/tags/azure-kubernetes-service/page/8/index.html b/cnny-2023/tags/azure-kubernetes-service/page/8/index.html index e97e8bb828..4dcb877336 100644 --- a/cnny-2023/tags/azure-kubernetes-service/page/8/index.html +++ b/cnny-2023/tags/azure-kubernetes-service/page/8/index.html @@ -14,13 +14,13 @@ - +

    21 posts tagged with "azure-kubernetes-service"

    View All Tags

    · 11 min read
    Paul Yu

    Welcome to Day 2 of Week 2 of #CloudNativeNewYear!

    The theme for this week is #Kubernetes fundamentals. Yesterday we talked about how to deploy a containerized web app workload to Azure Kubernetes Service (AKS). Today we'll explore the topic of services and ingress and walk through the steps of making our containers accessible both internally as well as over the internet so that you can share it with the world 😊

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Exposing Pods via Service
    • Exposing Services via Ingress
    • Takeaways
    • Resources

    Exposing Pods via Service

    There are a few ways to expose your pod in Kubernetes. One way is to take an imperative approach and use the kubectl expose command. This is probably the quickest way to achieve your goal, but it isn't the best way. A better way to expose your pod is to take a declarative approach by creating a service manifest file and deploying it using the kubectl apply command.

    Don't worry if you are unsure of how to make this manifest, we'll use kubectl to help generate it.

    First, let's ensure we have the database deployed on our AKS cluster.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests/deployment-db.yaml

    Next, let's deploy the application. If you are following along from yesterday's content, there isn't anything you need to change; however, if you are deploying the app from scratch, you'll need to modify the deployment-app.yaml manifest and update it with your image tag and database pod's IP address.

    kubectl apply -f ./manifests/deployment-app.yaml

    Now, let's expose the database using a service so that we can leverage Kubernetes' built-in service discovery and reference it by name, not pod IP. Run the following command.

    kubectl expose deployment azure-voting-db \
    --port=5432 \
    --target-port=5432

    With the database exposed using a service, we can update the app deployment manifest to use the service name instead of the pod IP. This way, if the pod ever gets assigned a new IP, we don't have to worry about updating the IP each time and redeploying our web application. Kubernetes has an internal service discovery mechanism in place that allows us to reference a service by its name.
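    As a brief aside on how that name resolution works: cluster DNS gives every Service a DNS record, so the short name is enough within the same namespace, while the fully qualified form (shown here assuming the default namespace) works from anywhere in the cluster:

    azure-voting-db                               # short name, same namespace
    azure-voting-db.default.svc.cluster.local     # fully qualified service name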

    Let's make an update to the manifest. Replace the environment variable for DATABASE_SERVER with the following:

    - name: DATABASE_SERVER
      value: azure-voting-db

    Re-deploy the app with the updated configuration.

    kubectl apply -f ./manifests/deployment-app.yaml

    One service down, one to go. Run the following command to expose the web application.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080

    Notice the --type argument has a value of LoadBalancer. This service type is implemented by the cloud-controller-manager, which is part of the Kubernetes control plane. When using a managed Kubernetes cluster such as Azure Kubernetes Service, a public standard load balancer will be provisioned when the service type is set to LoadBalancer. The load balancer will also have a public IP assigned, which will make your deployment publicly available.

    Kubernetes supports four service types:

    • ClusterIP: this is the default and limits service access to internal traffic within the cluster
    • NodePort: this assigns a port mapping on the node's IP address and allows traffic from the virtual network (outside the cluster)
    • LoadBalancer: as mentioned above, this creates a cloud-based load balancer
    • ExternalName: this is used in special case scenarios where you want to map a service to an external DNS name

    📝 NOTE: When exposing a web application to the internet, allowing external users to connect to your Service directly is not the best approach. Instead, you should use an Ingress, which we'll cover in the next section.

    Now, let's confirm you can reach the web app from the internet. You can use the following command to print the URL to your terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Great! The kubectl expose command gets the job done, but as mentioned above, it is not the best method of exposing deployments. It is better to expose deployments declaratively using a service manifest, so let's delete the services and redeploy using manifests.

    kubectl delete service azure-voting-db azure-voting-app

    To use kubectl to generate our manifest file, we can use the same kubectl expose command that we ran earlier but this time, we'll include --output=yaml and --dry-run=client. This will instruct the command to output the manifest that would be sent to the kube-api server in YAML format to the terminal.

    Generate the manifest for the database service.

    kubectl expose deployment azure-voting-db \
    --type=ClusterIP \
    --port=5432 \
    --target-port=5432 \
    --output=yaml \
    --dry-run=client > ./manifests/service-db.yaml

    Generate the manifest for the application service.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080 \
    --output=yaml \
    --dry-run=client > ./manifests/service-app.yaml

    The command above redirected the YAML output to your manifests directory. Here is what the web application service looks like.

    apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: null
      labels:
        app: azure-voting-app
      name: azure-voting-app
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: azure-voting-app
      type: LoadBalancer
    status:
      loadBalancer: {}

    💡 TIP: To view the schema of any api-resource in Kubernetes, you can use the kubectl explain command. In this case the kubectl explain service command will tell us exactly what each of these fields do.

    Re-deploy the services using the new service manifests.

    kubectl apply -f ./manifests/service-db.yaml -f ./manifests/service-app.yaml

    # You should see TYPE is set to LoadBalancer and the EXTERNAL-IP is set
    kubectl get service azure-voting-db azure-voting-app

    Confirm that our application is accessible again. Run the following command to print the URL to the terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    That was easy, right? We just exposed both of our pods using Kubernetes services. The database only needs to be accessible from within the cluster so ClusterIP is perfect for that. For the web application, we specified the type to be LoadBalancer so that we can access the application over the public internet.

    But wait... remember that if you want to expose web applications over the public internet, a Service with a public IP is not the best way; the better approach is to use an Ingress resource.

    Exposing Services via Ingress

    If you read through the Kubernetes documentation on Ingress you will see a diagram that depicts the Ingress sitting in front of the Service resource with a routing rule between them. In order to use Ingress, you need to deploy an Ingress Controller, which can be configured with many routing rules to forward traffic to one or many backend services. So effectively, an Ingress is a load balancer for your Services.

    With that said, we no longer need a service type of LoadBalancer since the service does not need to be accessible from the internet. It only needs to be accessible from the Ingress Controller (internal to the cluster) so we can change the service type to ClusterIP.

    Update your service-app.yaml file to look like this:

    apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: null
      labels:
        app: azure-voting-app
      name: azure-voting-app
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector:
        app: azure-voting-app

    📝 NOTE: The default service type is ClusterIP so we can omit the type altogether.

    Re-apply the app service manifest.

    kubectl apply -f ./manifests/service-app.yaml

    # You should see TYPE set to ClusterIP and EXTERNAL-IP set to <none>
    kubectl get service azure-voting-app

    Next, we need to install an Ingress Controller. There are quite a few options, and the Kubernetes-maintained NGINX Ingress Controller is commonly deployed.

    You could install this manually by following these instructions, but if you do that you'll be responsible for maintaining and supporting the resource.

    I like to take advantage of free maintenance and support when I can get it, so I'll opt to use the Web Application Routing add-on for AKS.

    💡 TIP: Whenever you install an AKS add-on, it will be maintained and fully supported by Azure Support.

    Enable the web application routing add-on in our AKS cluster with the following command.

    az aks addon enable \
    --name <YOUR_AKS_NAME> \
    --resource-group <YOUR_AKS_RESOURCE_GROUP> \
    --addon web_application_routing

    ⚠️ WARNING: This command can take a few minutes to complete

    Now, let's use the same approach we took in creating our service to create our Ingress resource. Run the following command to generate the Ingress manifest.

    kubectl create ingress azure-voting-app \
    --class=webapprouting.kubernetes.azure.com \
    --rule="/*=azure-voting-app:80" \
    --output yaml \
    --dry-run=client > ./manifests/ingress.yaml

    The --class=webapprouting.kubernetes.azure.com option activates the AKS web application routing add-on. This AKS add-on can also integrate with other Azure services such as Azure DNS and Azure Key Vault for TLS certificate management and this special class makes it all work.

    The --rule="/*=azure-voting-app:80" option looks confusing but we can use kubectl again to help us understand how to format the value for the option.

    kubectl create ingress --help

    In the output you will see the following:

    --rule=[]:
    Rule in format host/path=service:port[,tls=secretname]. Paths containing the leading character '*' are
    considered pathType=Prefix. tls argument is optional.

    It expects a host and path separated by a forward-slash, then expects the backend service name and port separated by a colon. We're not using a hostname for this demo so we can omit it. For the path, an asterisk is used to specify a wildcard path prefix.

    So, the value of /*=azure-voting-app:80 creates a routing rule for all paths following the domain (or in our case since we don't have a hostname specified, the IP) to route traffic to our azure-voting-app backend service on port 80.
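    For completeness, here is a purely hypothetical example of the full rule format with a hostname and TLS secret (this demo omits both):

    kubectl create ingress azure-voting-app \
      --class=webapprouting.kubernetes.azure.com \
      --rule="voting.contoso.com/*=azure-voting-app:80,tls=voting-tls-secret"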

    📝 NOTE: Configuring the hostname and TLS is outside the scope of this demo but please visit this URL https://bit.ly/aks-webapp-routing for an in-depth hands-on lab centered around Web Application Routing on AKS.

    Your ingress.yaml file should look like this:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      creationTimestamp: null
      name: azure-voting-app
    spec:
      ingressClassName: webapprouting.kubernetes.azure.com
      rules:
      - http:
          paths:
          - backend:
              service:
                name: azure-voting-app
                port:
                  number: 80
            path: /
            pathType: Prefix
    status:
      loadBalancer: {}

    Apply the app ingress manifest.

    kubectl apply -f ./manifests/ingress.yaml

    Validate the web application is available from the internet again. You can run the following command to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Takeaways

    Exposing your applications both internally and externally can be easily achieved using Service and Ingress resources respectively. If your service is HTTP or HTTPS based and needs to be accessible from outside the cluster, use an Ingress with an internal Service (i.e., ClusterIP or NodePort); otherwise, use the Service resource. If your TCP-based Service needs to be publicly accessible, you set the type to LoadBalancer to expose a public IP for it. To learn more about these resources, please visit the links listed below.

    Lastly, if you are unsure how to begin writing your service manifest, you can use kubectl and have it do most of the work for you 🥳

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/azure-kubernetes-service/page/9/index.html b/cnny-2023/tags/azure-kubernetes-service/page/9/index.html index 61eedb4630..57b8e34bef 100644 --- a/cnny-2023/tags/azure-kubernetes-service/page/9/index.html +++ b/cnny-2023/tags/azure-kubernetes-service/page/9/index.html @@ -14,14 +14,14 @@ - +

    21 posts tagged with "azure-kubernetes-service"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 3 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about Services and Ingress. Today we'll explore the topic of passing configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Decouple configurations with ConfigMaps and Secrets
    • Passing Environment Data with ConfigMaps and Secrets
    • Conclusion

    Decouple configurations with ConfigMaps and Secrets

    A ConfigMap is a Kubernetes object that decouples configuration data from pod definitions. Kubernetes Secrets are similar, but were designed to decouple sensitive information.

    Separating the configuration and secrets from your application promotes better organization and security of your Kubernetes environment. It also enables you to share the same configuration and different secrets across multiple pods and deployments, which can simplify scaling and management. Using ConfigMaps and Secrets in Kubernetes is a best practice that can help to improve the scalability, security, and maintainability of your cluster.

    By the end of this tutorial, you'll have added a Kubernetes ConfigMap and Secret to the Azure Voting deployment.

    Passing Environment Data with ConfigMaps and Secrets

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Create the ConfigMap

    ConfigMaps can be used in one of two ways: as environment variables or as volumes.

    For this tutorial you'll use a ConfigMap to create three environment variables inside the pod: DATABASE_SERVER, FIRST_VALUE, and SECOND_VALUE. The DATABASE_SERVER value provides part of the connection string to the Postgres database. FIRST_VALUE and SECOND_VALUE are configuration options that change which voting options the application presents to the users.

    Follow the below steps to create a new ConfigMap:

    1. Create a YAML file named 'config-map.yaml'. In this file, specify the environment variables for the application.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: azure-voting-config
      data:
        DATABASE_SERVER: azure-voting-db
        FIRST_VALUE: "Go"
        SECOND_VALUE: "Rust"
    2. Create the config map in your Kubernetes cluster by running the following command:

      kubectl create -f config-map.yaml

    Create the Secret

    The deployment-db.yaml and deployment-app.yaml are Kubernetes manifests that deploy the Azure Voting App. Currently, those deployment manifests contain the environment variables POSTGRES_PASSWORD and DATABASE_PASSWORD with the value stored as plain text. Your task is to replace that environment variable with a Kubernetes Secret.

    Create a Secret by running the following commands:

    1. Encode mypassword.

      echo -n "mypassword" | base64
    2. Create a YAML file named secret.yaml. In this file, add POSTGRES_PASSWORD as the key and the encoded value returned above as the value in the data section.

      apiVersion: v1
      kind: Secret
      metadata:
        name: azure-voting-secret
      type: Opaque
      data:
        POSTGRES_PASSWORD: bXlwYXNzd29yZA==
    3. Create the Secret in your Kubernetes cluster by running the following command:

      kubectl create -f secret.yaml

    ⚠️ WARNING: base64 encoding is a simple and widely supported way to obscure plaintext data, but it is not secure, as it can easily be decoded. If you want to store sensitive data like passwords, you should use a more secure method, such as encrypting with a Key Management Service (KMS), before storing it in the Secret.

    Modify the app deployment manifest

    With the ConfigMap and Secret both created, the next step is to replace the environment variables provided in the application deployment manifest with the values stored in the ConfigMap and the Secret.

    Complete the following steps to add the ConfigMap and Secret to the deployment manifest:

    1. Open the Kubernetes manifest file deployment-app.yaml.

    2. In the containers section, add an envFrom section and update the env section.

      envFrom:
        - configMapRef:
            name: azure-voting-config
      env:
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: azure-voting-secret
              key: POSTGRES_PASSWORD

      Using envFrom exposes all the values within the ConfigMap as environment variables, so you don't have to list them individually. (A volume-based alternative is sketched after these steps.)

    3. Save the changes to the deployment manifest file.

    4. Apply the changes to the deployment by running the following command:

      kubectl apply -f deployment-app.yaml
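    As mentioned earlier, the other way to consume a ConfigMap is as a volume. The snippet below is a hedged sketch (not part of this tutorial) showing how the same azure-voting-config ConfigMap could be mounted so that each key appears as a file under /etc/config inside the container:

      spec:
        containers:
          - name: azure-voting-app
            # ...image, ports, and env as before
            volumeMounts:
              - name: app-config
                mountPath: /etc/config   # each ConfigMap key becomes a file here
                readOnly: true
        volumes:
          - name: app-config
            configMap:
              name: azure-voting-config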

    Modify the database deployment manifest

    Next, update the database deployment manifest and replace the plain text environment variable with the Kubernetes Secret.

    1. Open the deployment-db.yaml.

    2. To add the secret to the deployment, replace the env section with the following code:

      env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: azure-voting-secret
              key: POSTGRES_PASSWORD
    3. Apply the updated manifest.

      kubectl apply -f deployment-db.yaml

    Verify the ConfigMap and output environment variables

    Verify that the ConfigMap was added to your deployment by running the following command:

    kubectl describe deployment azure-voting-app

    Browse the output until you find the envFrom section with the config map reference.

    You can also verify that the environment variables from the ConfigMap are being passed to the container by running the command kubectl exec -it <pod-name> -- printenv. This command will show you all the environment variables passed to the pod, including the ones from the ConfigMap.

    By following these steps, you will have successfully added a config map to the Azure Voting App Kubernetes deployment, and the environment variables defined in the config map will be passed to the container running in the pod.

    Verify the Secret and describe the deployment

    Once the secret has been created you can verify it exists by running the following command:

    kubectl get secrets

    You can view additional information, such as labels, annotations, type, and the Data by running kubectl describe:

    kubectl describe secret azure-voting-secret

    By default, the describe command doesn't output the encoded value, but if you output the results as JSON or YAML you'll be able to see the secret's encoded value.

     kubectl get secret azure-voting-secret -o json
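    If you want to see the original plaintext value (remember, base64 only obscures it), you can pull the field with JSONPath and decode it:

    kubectl get secret azure-voting-secret -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 -d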

    Conclusion

    In conclusion, using ConfigMaps and Secrets in Kubernetes can help to improve the scalability, security, and maintainability of your cluster. By decoupling configuration data and sensitive information from pod definitions, you can promote better organization and security in your Kubernetes environment. Additionally, separating these elements allows for sharing the same configuration and different secrets across multiple pods and deployments, simplifying scaling and management.

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/cloud-native-new-year/index.html b/cnny-2023/tags/cloud-native-new-year/index.html index 2729e25e96..f4b76ce33f 100644 --- a/cnny-2023/tags/cloud-native-new-year/index.html +++ b/cnny-2023/tags/cloud-native-new-year/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "cloud-native-new-year"

    View All Tags

    · 11 min read
    Paul Yu

    Welcome to Day 2 of Week 2 of #CloudNativeNewYear!

    The theme for this week is #Kubernetes fundamentals. Yesterday we talked about how to deploy a containerized web app workload to Azure Kubernetes Service (AKS). Today we'll explore the topic of services and ingress and walk through the steps of making our containers accessible both internally as well as over the internet so that you can share it with the world 😊

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Exposing Pods via Service
    • Exposing Services via Ingress
    • Takeaways
    • Resources

    Exposing Pods via Service

    There are a few ways to expose your pod in Kubernetes. One way is to take an imperative approach and use the kubectl expose command. This is probably the quickest way to achieve your goal but it isn't the best way. A better way to expose your pod by taking a declarative approach by creating a services manifest file and deploying it using the kubectl apply command.

    Don't worry if you are unsure of how to make this manifest, we'll use kubectl to help generate it.

    First, let's ensure we have the database deployed on our AKS cluster.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests/deployment-db.yaml

    Next, let's deploy the application. If you are following along from yesterday's content, there isn't anything you need to change; however, if you are deploy the app from scratch, you'll need to modify the deployment-app.yaml manifest and update it with your image tag and database pod's IP address.

    kubectl apply -f ./manifests/deployment-app.yaml

    Now, let's expose the database using a service so that we can leverage Kubernetes' built-in service discovery to be able to reference it by name; not pod IP. Run the following command.

    kubectl expose deployment azure-voting-db \
    --port=5432 \
    --target-port=5432

    With the database exposed using service, we can update the app deployment manifest to use the service name instead of pod IP. This way, if the pod ever gets assigned a new IP, we don't have to worry about updating the IP each time and redeploying our web application. Kubernetes has internal service discovery mechanism in place that allows us to reference a service by its name.

    Let's make an update to the manifest. Replace the environment variable for DATABASE_SERVER with the following:

    - name: DATABASE_SERVER
    value: azure-voting-db

    Re-deploy the app with the updated configuration.

    kubectl apply -f ./manifests/deployment-app.yaml

    One service down, one to go. Run the following command to expose the web application.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080

    Notice the --type argument has a value of LoadBalancer. This service type is implemented by the cloud-controller-manager which is part of the Kubernetes control plane. When using a managed Kubernetes cluster such as Azure Kubernetes Service, a public standard load balancer will be able to provisioned when the service type is set to LoadBalancer. The load balancer will also have a public IP assigned which will make your deployment publicly available.

    Kubernetes supports four service types:

    • ClusterIP: this is the default and limits service access to internal traffic within the cluster
    • NodePort: this assigns a port mapping on the node's IP address and allows traffic from the virtual network (outside the cluster)
    • LoadBalancer: as mentioned above, this creates a cloud-based load balancer
    • ExternalName: this is used in special case scenarios where you want to map a service to an external DNS name

    📝 NOTE: When exposing a web application to the internet, allowing external users to connect to your Service directly is not the best approach. Instead, you should use an Ingress, which we'll cover in the next section.

    Now, let's confirm you can reach the web app from the internet. You can use the following command to print the URL to your terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Great! The kubectl expose command gets the job done, but as mentioned above, it is not the best method of exposing deployments. It is better to expose deployments declaratively using a service manifest, so let's delete the services and redeploy using manifests.

    kubectl delete service azure-voting-db azure-voting-app

    To use kubectl to generate our manifest file, we can use the same kubectl expose command that we ran earlier but this time, we'll include --output=yaml and --dry-run=client. This will instruct the command to output the manifest that would be sent to the kube-api server in YAML format to the terminal.

    Generate the manifest for the database service.

    kubectl expose deployment azure-voting-db \
    --type=ClusterIP \
    --port=5432 \
    --target-port=5432 \
    --output=yaml \
    --dry-run=client > ./manifests/service-db.yaml

    Generate the manifest for the application service.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080 \
    --output=yaml \
    --dry-run=client > ./manifests/service-app.yaml

    The command above redirected the YAML output to your manifests directory. Here is what the web application service looks like.

    apiVersion: v1
    kind: Service
    metadata:
    creationTimestamp: null
    labels:
    app: azure-voting-app
    name: azure-voting-app
    spec:
    ports:
    - port: 80
    protocol: TCP
    targetPort: 8080
    selector:
    app: azure-voting-app
    type: LoadBalancer
    status:
    loadBalancer: {}

    💡 TIP: To view the schema of any api-resource in Kubernetes, you can use the kubectl explain command. In this case the kubectl explain service command will tell us exactly what each of these fields do.

    Re-deploy the services using the new service manifests.

    kubectl apply -f ./manifests/service-db.yaml -f ./manifests/service-app.yaml

    # You should see TYPE is set to LoadBalancer and the EXTERNAL-IP is set
    kubectl get service azure-voting-db azure-voting-app

    Confirm again that our application is accessible again. Run the following command to print the URL to the terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    That was easy, right? We just exposed both of our pods using Kubernetes services. The database only needs to be accessible from within the cluster so ClusterIP is perfect for that. For the web application, we specified the type to be LoadBalancer so that we can access the application over the public internet.

    But wait... remember that if you want to expose web applications over the public internet, a Service with a public IP is not the best way; the better approach is to use an Ingress resource.

    Exposing Services via Ingress

    If you read through the Kubernetes documentation on Ingress you will see a diagram that depicts the Ingress sitting in front of the Service resource with a routing rule between it. In order to use Ingress, you need to deploy an Ingress Controller and it can be configured with many routing rules to forward traffic to one or many backend services. So effectively, an Ingress is a load balancer for your Services.

    With that said, we no longer need a service type of LoadBalancer since the service does not need to be accessible from the internet. It only needs to be accessible from the Ingress Controller (internal to the cluster) so we can change the service type to ClusterIP.

    Update your service.yaml file to look like this:

    apiVersion: v1
    kind: Service
    metadata:
    creationTimestamp: null
    labels:
    app: azure-voting-app
    name: azure-voting-app
    spec:
    ports:
    - port: 80
    protocol: TCP
    targetPort: 8080
    selector:
    app: azure-voting-app

    📝 NOTE: The default service type is ClusterIP so we can omit the type altogether.

    Re-apply the app service manifest.

    kubectl apply -f ./manifests/service-app.yaml

    # You should see TYPE set to ClusterIP and EXTERNAL-IP set to <none>
    kubectl get service azure-voting-app

    Next, we need to install an Ingress Controller. There are quite a few options, and the Kubernetes-maintained NGINX Ingress Controller is commonly deployed.

    You could install this manually by following these instructions, but if you do that you'll be responsible for maintaining and supporting the resource.

    I like to take advantage of free maintenance and support when I can get it, so I'll opt to use the Web Application Routing add-on for AKS.

    💡 TIP: Whenever you install an AKS add-on, it will be maintained and fully supported by Azure Support.

    Enable the web application routing add-on in our AKS cluster with the following command.

    az aks addon enable \
    --name <YOUR_AKS_NAME> \
    --resource-group <YOUR_AKS_RESOURCE_GROUP>
    --addon web_application_routing

    ⚠️ WARNING: This command can take a few minutes to complete

    Now, let's use the same approach we took in creating our service to create our Ingress resource. Run the following command to generate the Ingress manifest.

    kubectl create ingress azure-voting-app \
    --class=webapprouting.kubernetes.azure.com \
    --rule="/*=azure-voting-app:80" \
    --output yaml \
    --dry-run=client > ./manifests/ingress.yaml

    The --class=webapprouting.kubernetes.azure.com option activates the AKS web application routing add-on. This AKS add-on can also integrate with other Azure services such as Azure DNS and Azure Key Vault for TLS certificate management and this special class makes it all work.

    The --rule="/*=azure-voting-app:80" option looks confusing but we can use kubectl again to help us understand how to format the value for the option.

    kubectl create ingress --help

    In the output you will see the following:

    --rule=[]:
    Rule in format host/path=service:port[,tls=secretname]. Paths containing the leading character '*' are
    considered pathType=Prefix. tls argument is optional.

    It expects a host and path separated by a forward-slash, then expects the backend service name and port separated by a colon. We're not using a hostname for this demo so we can omit it. For the path, an asterisk is used to specify a wildcard path prefix.

    So, the value of /*=azure-voting-app:80 creates a routing rule for all paths following the domain (or in our case since we don't have a hostname specified, the IP) to route traffic to our azure-voting-app backend service on port 80.
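
    For instance, if you did want to add a hostname and a TLS secret, a rule could look like the following (the hostname and secret name here are placeholders for illustration, not part of this demo):

    kubectl create ingress azure-voting-app \
      --class=webapprouting.kubernetes.azure.com \
      --rule="myapp.contoso.com/*=azure-voting-app:80,tls=myapp-tls" \
      --output yaml \
      --dry-run=client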

    📝 NOTE: Configuring the hostname and TLS is outside the scope of this demo but please visit this URL https://bit.ly/aks-webapp-routing for an in-depth hands-on lab centered around Web Application Routing on AKS.

    Your ingress.yaml file should look like this:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      creationTimestamp: null
      name: azure-voting-app
    spec:
      ingressClassName: webapprouting.kubernetes.azure.com
      rules:
      - http:
          paths:
          - backend:
              service:
                name: azure-voting-app
                port:
                  number: 80
            path: /
            pathType: Prefix
    status:
      loadBalancer: {}

    Apply the app ingress manifest.

    kubectl apply -f ./manifests/ingress.yaml

    Validate the web application is available from the internet again. You can run the following command to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Takeaways

    Exposing your applications both internally and externally can be easily achieved using Service and Ingress resources, respectively. If your service is HTTP or HTTPS based and needs to be accessible from outside the cluster, use Ingress with an internal Service (i.e., ClusterIP or NodePort); otherwise, use the Service resource. If your TCP-based Service needs to be publicly accessible, set the type to LoadBalancer to expose a public IP for it. To learn more about these resources, please visit the links listed below.

    Lastly, if you are unsure how to begin writing your service manifest, you can use kubectl and have it do most of the work for you 🥳

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 8 min read
    Paul Yu

    Welcome to Day 4 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about how to set app configurations and secrets at runtime using Kubernetes ConfigMaps and Secrets. Today we'll explore the topic of persistent storage on Kubernetes and show how you can leverage Persistent Volumes and Persistent Volume Claims to ensure your PostgreSQL data survives container restarts.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Containers are ephemeral
    • Persistent storage on Kubernetes
    • Persistent storage on AKS
    • Takeaways
    • Resources

    Containers are ephemeral

    In our sample application, the frontend UI writes vote values to a backend PostgreSQL database. By default the database container stores its data on the container's local file system, so there will be data loss when the pod is re-deployed or crashes as containers are meant to start with a clean slate each time.

    Let's re-deploy our sample app and experience the problem first hand.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests

    Wait for the azure-voting-app ingress to be assigned a public IP, then browse to the website and submit some votes. Use the command below to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Now, let's delete the pods and watch Kubernetes do what it does best... that is, re-schedule pods.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl delete --all pod --wait=false && kubectl get po -w

    Once the pods have been recovered, reload the website and confirm the vote tally has been reset to zero.

    We need to fix this so that the data outlives the container.

    Persistent storage on Kubernetes

    In order for application data to survive crashes and restarts, you must implement Persistent Volumes and Persistent Volume Claims.

    A persistent volume represents storage that is available to the cluster. Storage volumes can be provisioned manually by an administrator or dynamically using the Container Storage Interface (CSI) and storage classes, which include information on how to provision CSI volumes.

    When a user needs to add persistent storage to their application, a persistent volume claim is made to allocate chunks of storage from the volume. This "claim" includes things like volume mode (e.g., file system or block storage), the amount of storage to allocate, the access mode, and optionally a storage class. Once a persistent volume claim has been deployed, users can add the volume to the pod and mount it in a container.

    In the next section, we'll demonstrate how to enable persistent storage on AKS.

    Persistent storage on AKS

    With AKS, CSI drivers and storage classes are pre-deployed into your cluster. This allows you to natively use Azure Disks, Azure Files, and Azure Blob Storage as persistent volumes. You can either bring your own Azure storage account and use it with AKS or have AKS provision an Azure storage account for you.

    To view the Storage CSI drivers that have been enabled in your AKS cluster, run the following command.

    az aks show \
    --name <YOUR_AKS_NAME> \
    --resource-group <YOUR_AKS_RESOURCE_GROUP> \
    --query storageProfile

    You should see output that looks like this.

    {
      "blobCsiDriver": null,
      "diskCsiDriver": {
        "enabled": true,
        "version": "v1"
      },
      "fileCsiDriver": {
        "enabled": true
      },
      "snapshotController": {
        "enabled": true
      }
    }

    To view the storage classes that have been installed in your cluster, run the following command.

    kubectl get storageclass

    Workload requirements will dictate which CSI driver and storage class you will need to use.

    If you need to mount Azure Blob Storage (object storage), use the blobCsiDriver. The driver may not be enabled by default, but you can enable it by following the instructions found in the Resources section below.

    If you need block or file storage, leverage either diskCsiDriver or fileCsiDriver. The decision between the two boils down to whether the underlying storage needs to be accessible by one pod or by multiple pods. It is important to note that diskCsiDriver (Azure Disk) currently supports access from a single pod only, so if your data needs to be accessible by multiple pods at the same time, opt for fileCsiDriver (Azure Files).

    For our PostgreSQL deployment, we'll use the diskCsiDriver and have AKS create an Azure Disk resource for us. There is no need to create a PV resource; all we need to do is create a PVC using the managed-csi-premium storage class.

    Run the following command to create the PVC.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-azuredisk
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: managed-csi-premium
    EOF

    When you check the PVC resource, you'll notice the STATUS is set to Pending. It will be set to Bound once the volume is mounted in the PostgreSQL container.

    kubectl get persistentvolumeclaim
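
    If the claim ever seems stuck in Pending, describing it surfaces the provisioning events from the CSI driver (an optional troubleshooting step):

    kubectl describe persistentvolumeclaim pvc-azuredisk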

    Let's delete the azure-voting-db deployment.

    kubectl delete deploy azure-voting-db

    Next, we need to apply an updated deployment manifest which includes our PVC.

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      creationTimestamp: null
      labels:
        app: azure-voting-db
      name: azure-voting-db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: azure-voting-db
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: azure-voting-db
        spec:
          containers:
          - image: postgres:15.0-alpine
            name: postgres
            ports:
            - containerPort: 5432
            env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: azure-voting-secret
                  key: POSTGRES_PASSWORD
            resources: {}
            volumeMounts:
            - name: mypvc
              mountPath: "/var/lib/postgresql/data"
              subPath: "data"
          volumes:
          - name: mypvc
            persistentVolumeClaim:
              claimName: pvc-azuredisk
    EOF

    In the manifest above, you'll see that we are mounting a new volume called mypvc (the name can be whatever you want) in the pod which points to a PVC named pvc-azuredisk. With the volume in place, we can mount it in the container by referencing the name of the volume mypvc and setting the mount path to /var/lib/postgresql/data (which is the default path).

    💡 IMPORTANT: When mounting a volume whose root is not empty, add subPath to the volume mount and point it at a subdirectory of the volume rather than mounting the volume at its root. In our case, when the Azure Disk is formatted, it leaves behind a lost+found directory as documented here.

    Watch the pods and wait for the STATUS to show Running and the pod's READY status to show 1/1.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl get po -w

    Verify that the STATUS of the PVC is now set to Bound.

    kubectl get persistentvolumeclaim

    With the new database container running, let's restart the application pod, wait for the pod's READY status to show 1/1, then head back over to our web browser and submit a few votes.

    kubectl delete pod -lapp=azure-voting-app --wait=false && kubectl get po -lapp=azure-voting-app -w

    Now the moment of truth... let's rip out the pods again, wait for the pods to be re-scheduled, and confirm our vote counts remain intact.

    kubectl delete --all pod --wait=false && kubectl get po -w

    If you navigate back to the website, you'll find the votes are still there 🎉

    Takeaways

    By design, containers are meant to be ephemeral, and stateless workloads are ideal on Kubernetes. However, there will come a time when your data needs to outlive the container. To persist data in your Kubernetes workloads, you need to leverage PVs, PVCs, and optionally storage classes. In our demo scenario, we leveraged CSI drivers built into AKS and created a PVC using pre-installed storage classes. From there, we updated the database deployment to mount the PVC in the container, and AKS did the rest of the work in provisioning the underlying Azure Disk. If the built-in storage classes do not fit your needs (for example, you need to change the reclaimPolicy or use a different SKU for the Azure resource), you can create your own custom storage class and configure it just the way you need it 😊
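
    As a rough sketch, a custom storage class for the Azure Disk CSI driver could look something like this (the name, SKU, and reclaimPolicy below are illustrative assumptions, not values used in this demo):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-csi-premium-retain
    provisioner: disk.csi.azure.com
    parameters:
      skuName: Premium_LRS
    reclaimPolicy: Retain
    volumeBindingMode: WaitForFirstConsumer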

    We'll revisit this topic again next week but in the meantime, check out some of the resources listed below to learn more.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 12 min read
    Paul Yu

    Welcome to Day 2 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about getting an existing application running in Kubernetes with a full pipeline in GitHub Actions. Today we'll evaluate our sample application's configuration, storage, and networking requirements and implement them using Kubernetes and Azure resources.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Implement environment variables using ConfigMaps
    • Implement persistent volumes using Azure Files
    • Implement secrets using Azure Key Vault
    • Re-package deployments
    • Conclusion
    • Resources
    caution

    Before you begin, make sure you've gone through yesterday's post to set up your AKS cluster.

    Gather requirements

    The eShopOnWeb application is written in .NET 7 and has two major pieces of functionality. The web UI is where customers can browse and shop. The web UI also includes an admin portal for managing the product catalog. This admin portal is packaged as a WebAssembly application and relies on a separate REST API service. Both the web UI and the REST API connect to the same SQL Server container.

    Looking through the source code, which can be found here, we can identify requirements for configs, persistent storage, and secrets.

    Database server

    • Need to store the password for the sa account as a secure secret
    • Need persistent storage volume for data directory
    • Need to inject environment variables for SQL Server license type and EULA acceptance

    Web UI and REST API service

    • Need to store database connection string as a secure secret
    • Need to inject ASP.NET environment variables to override app settings
    • Need persistent storage volume for ASP.NET key storage

    Implement environment variables using ConfigMaps

    ConfigMaps are relatively straightforward to create. If you were following along with the examples last week, this should be review 😉

    Create a ConfigMap to store database environment variables.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mssql-settings
    data:
      MSSQL_PID: Developer
      ACCEPT_EULA: "Y"
    EOF

    Create another ConfigMap to store ASP.NET environment variables.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aspnet-settings
    data:
      ASPNETCORE_ENVIRONMENT: Development
    EOF

    Implement persistent volumes using Azure Files

    Similar to last week, we'll take advantage of storage classes built into AKS. For our SQL Server data, we'll use the azurefile-csi-premium storage class and leverage an Azure Files resource as our PersistentVolume.

    Create a PersistentVolumeClaim (PVC) for persisting SQL Server data.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mssql-data
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: azurefile-csi-premium
      resources:
        requests:
          storage: 5Gi
    EOF

    Create another PVC for persisting ASP.NET data.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: aspnet-data
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: azurefile-csi-premium
      resources:
        requests:
          storage: 5Gi
    EOF

    Implement secrets using Azure Key Vault

    It's a well-known fact that Kubernetes Secrets are not really secrets. They're just base64-encoded values and not secure, especially if malicious users have access to your Kubernetes cluster.
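
    For example, anyone with read access to Secrets in a namespace can decode a value in one line (the secret and key names below are hypothetical, just to illustrate the point):

    # Hypothetical secret/key names; prints the plain-text value
    kubectl get secret my-app-secret -o jsonpath='{.data.password}' | base64 --decode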

    In a production scenario, you will want to leverage an external vault like Azure Key Vault or HashiCorp Vault to encrypt and store secrets.

    With AKS, we can enable the Secrets Store CSI driver add-on which will allow us to leverage Azure Key Vault.

    # Set some variables
    RG_NAME=<YOUR_RESOURCE_GROUP_NAME>
    AKS_NAME=<YOUR_AKS_CLUSTER_NAME>
    ACR_NAME=<YOUR_ACR_NAME>

    az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name $AKS_NAME \
    --resource-group $RG_NAME

    With the add-on enabled, you should see aks-secrets-store-csi-driver and aks-secrets-store-provider-azure resources installed on each node in your Kubernetes cluster.

    Run the command below to verify.

    kubectl get pods \
    --namespace kube-system \
    --selector 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'

    The Secrets Store CSI driver allows us to use secret stores via Container Storage Interface (CSI) volumes. This provider offers capabilities such as mounting and syncing between the secure vault and Kubernetes Secrets. On AKS, the Azure Key Vault Provider for Secrets Store CSI Driver enables integration with Azure Key Vault.

    You may not have an Azure Key Vault created yet, so let's create one and add some secrets to it.

    AKV_NAME=$(az keyvault create \
    --name akv-eshop$RANDOM \
    --resource-group $RG_NAME \
    --query name -o tsv)

    # Database server password
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-password \
    --value "@someThingComplicated1234"

    # Catalog database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-catalog \
    --value "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    # Identity database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-identity \
    --value "Server=db;Database=Microsoft.eShopOnWeb.Identity;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    Pod authentication using Azure Workload Identity

    In order for our Pods to retrieve secrets from Azure Key Vault, we'll need to set up a way for the Pod to authenticate against Azure AD. This can be achieved by implementing the new Azure Workload Identity feature of AKS.

    info

    At the time of this writing, the workload identity feature of AKS is in Preview.

    The workload identity feature within AKS allows us to leverage native Kubernetes resources and link a Kubernetes ServiceAccount to an Azure Managed Identity to authenticate against Azure AD.

    For the authentication flow, our Kubernetes cluster will act as an OpenID Connect (OIDC) issuer and will be able to issue identity tokens to ServiceAccounts, which will be assigned to our Pods.

    The Azure Managed Identity will be granted permission to access secrets in our Azure Key Vault and with the ServiceAccount being assigned to our Pods, they will be able to retrieve our secrets.

    For more information on how the authentication mechanism all works, check out this doc.

    To implement all this, start by enabling the new preview feature for AKS.

    az feature register \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"
    caution

    This can take several minutes to complete.

    Check the status and ensure the state shows Registered before moving forward.

    az feature show \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"

    Update your AKS cluster to enable the workload identity feature and enable the OIDC issuer endpoint.

    az aks update \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --enable-workload-identity \
    --enable-oidc-issuer

    Create an Azure Managed Identity and retrieve its client ID.

    MANAGED_IDENTITY_CLIENT_ID=$(az identity create \
    --name aks-workload-identity \
    --resource-group $RG_NAME \
    --subscription $(az account show --query id -o tsv) \
    --query 'clientId' -o tsv)

    Create the Kubernetes ServiceAccount.

    # Set namespace (this must align with the namespace that your app is deployed into)
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account name
    SERVICE_ACCOUNT_NAME=eshop-serviceaccount

    # Create the service account
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      annotations:
        azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
      labels:
        azure.workload.identity/use: "true"
      name: ${SERVICE_ACCOUNT_NAME}
      namespace: ${SERVICE_ACCOUNT_NAMESPACE}
    EOF
    info

    Note: to enable this ServiceAccount to work with Azure Workload Identity, you must annotate the resource with azure.workload.identity/client-id and add a label of azure.workload.identity/use: "true".

    That was a lot... Let's review what we just did.

    We have an Azure Managed Identity (object in Azure AD), an OIDC issuer URL (endpoint in our Kubernetes cluster), and a Kubernetes ServiceAccount.

    The next step is to "tie" these components together and establish a Federated Identity Credential so that Azure AD can trust authentication requests from your Kubernetes cluster.

    info

    This identity federation can be established between Azure AD and any Kubernetes cluster; not just AKS 🤗

    To establish the federated credential, we'll need the OIDC issuer URL, and a subject which points to your Kubernetes ServiceAccount.

    # Get the OIDC issuer URL
    OIDC_ISSUER_URL=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)

    # Set the subject name using this format: `system:serviceaccount:<YOUR_SERVICE_ACCOUNT_NAMESPACE>:<YOUR_SERVICE_ACCOUNT_NAME>`
    SUBJECT=system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME

    az identity federated-credential create \
    --name aks-federated-credential \
    --identity-name aks-workload-identity \
    --resource-group $RG_NAME \
    --issuer $OIDC_ISSUER_URL \
    --subject $SUBJECT

    With the authentication components set, we can now create a SecretProviderClass, which includes details about the Azure Key Vault, the secrets to pull out of the vault, and the identity used to access the vault.

    # Get the tenant id for the key vault
    TENANT_ID=$(az keyvault show \
    --name $AKV_NAME \
    --resource-group $RG_NAME \
    --query properties.tenantId -o tsv)

    # Create the secret provider for azure key vault
    kubectl apply -f - <<EOF
    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      name: eshop-azure-keyvault
    spec:
      provider: azure
      parameters:
        usePodIdentity: "false"
        useVMManagedIdentity: "false"
        clientID: "${MANAGED_IDENTITY_CLIENT_ID}"
        keyvaultName: "${AKV_NAME}"
        cloudName: ""
        objects: |
          array:
            - |
              objectName: mssql-password
              objectType: secret
              objectVersion: ""
            - |
              objectName: mssql-connection-catalog
              objectType: secret
              objectVersion: ""
            - |
              objectName: mssql-connection-identity
              objectType: secret
              objectVersion: ""
        tenantId: "${TENANT_ID}"
      secretObjects:
      - secretName: eshop-secrets
        type: Opaque
        data:
        - objectName: mssql-password
          key: mssql-password
        - objectName: mssql-connection-catalog
          key: mssql-connection-catalog
        - objectName: mssql-connection-identity
          key: mssql-connection-identity
    EOF

    Finally, let's grant the Azure Managed Identity permission to retrieve secrets from the Azure Key Vault.

    az keyvault set-policy \
    --name $AKV_NAME \
    --secret-permissions get \
    --spn $MANAGED_IDENTITY_CLIENT_ID

    Re-package deployments

    Update your database deployment to load environment variables from our ConfigMap, attach the PVC and SecretProviderClass as volumes, mount the volumes into the Pod, and use the ServiceAccount to retrieve secrets.

    Additionally, you may notice the database Pod is set to use fsGroup: 10001 as part of the securityContext. This is required because the MSSQL container runs as a non-root account called mssql; setting fsGroup ensures that account has the proper permissions to read/write data at the /var/opt/mssql mount path (we'll verify this right after applying the manifest below).

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: db
      labels:
        app: db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          securityContext:
            fsGroup: 10001
          serviceAccountName: ${SERVICE_ACCOUNT_NAME}
          containers:
          - name: db
            image: mcr.microsoft.com/mssql/server:2019-latest
            ports:
            - containerPort: 1433
            envFrom:
            - configMapRef:
                name: mssql-settings
            env:
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-password
            resources: {}
            volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
          volumes:
          - name: mssqldb
            persistentVolumeClaim:
              claimName: mssql-data
          - name: eshop-secrets
            csi:
              driver: secrets-store.csi.k8s.io
              readOnly: true
              volumeAttributes:
                secretProviderClass: eshop-azure-keyvault
    EOF
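
    Once the Pod is up, you can optionally confirm the effect of the fsGroup setting by checking the group ownership of the mount path (a quick sanity check, assuming the Deployment is named db as above):

    kubectl exec deploy/db -- ls -ld /var/opt/mssql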

    We'll update the API and Web deployments in a similar way.

    # Set the image tag
    IMAGE_TAG=<YOUR_IMAGE_TAG>

    # API deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
      labels:
        app: api
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          serviceAccount: ${SERVICE_ACCOUNT_NAME}
          containers:
          - name: api
            image: ${ACR_NAME}.azurecr.io/api:${IMAGE_TAG}
            ports:
            - containerPort: 80
            envFrom:
            - configMapRef:
                name: aspnet-settings
            env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
            resources: {}
            volumeMounts:
            - name: aspnet
              mountPath: /root/.aspnet/https
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
          volumes:
          - name: aspnet
            persistentVolumeClaim:
              claimName: aspnet-data
          - name: eshop-secrets
            csi:
              driver: secrets-store.csi.k8s.io
              readOnly: true
              volumeAttributes:
                secretProviderClass: eshop-azure-keyvault
    EOF

    # Web deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
      labels:
        app: web
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          serviceAccount: ${SERVICE_ACCOUNT_NAME}
          containers:
          - name: web
            image: ${ACR_NAME}.azurecr.io/web:${IMAGE_TAG}
            ports:
            - containerPort: 80
            envFrom:
            - configMapRef:
                name: aspnet-settings
            env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
            resources: {}
            volumeMounts:
            - name: aspnet
              mountPath: /root/.aspnet/https
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
          volumes:
          - name: aspnet
            persistentVolumeClaim:
              claimName: aspnet-data
          - name: eshop-secrets
            csi:
              driver: secrets-store.csi.k8s.io
              readOnly: true
              volumeAttributes:
                secretProviderClass: eshop-azure-keyvault
    EOF

    If all went well with your deployment updates, you should be able to browse to your website and buy some merchandise again 🥳

    echo "http://$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Conclusion

    Although there are no visible changes to our website, we've made a ton of changes on the Kubernetes backend to make this application much more secure and resilient.

    We used a combination of Kubernetes resources and AKS-specific features to achieve our goal of securing our secrets and ensuring data is not lost on container crashes and restarts.

    To learn more about the components we leveraged here today, check out the resources and additional tutorials listed below.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 10 min read
    Paul Yu

    Welcome to Day 3 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we added configuration, secrets, and storage to our app. Today we'll explore how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Generate TLS certificate and store in Azure Key Vault
    • Implement custom DNS using Azure DNS
    • Enable Web Application Routing add-on for AKS
    • Implement Ingress for the web application
    • Conclusion
    • Resources

    Gather requirements

    Currently, our eShopOnWeb app has three Kubernetes services deployed:

    1. db exposed internally via ClusterIP
    2. api exposed externally via LoadBalancer
    3. web exposed externally via LoadBalancer

    As mentioned in my post last week, Services allow applications to communicate with each other using DNS names. Kubernetes has built-in service discovery capabilities that allow Pods to resolve Services simply by using their names.

    In the case of our api and web deployments, they can simply reach the database by calling its name. The service type of ClusterIP for the db can remain as-is since it only needs to be accessed by the api and web apps.
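
    If you want to see that resolution in action, one quick way is to run a throwaway pod and look up the db Service (the busybox image tag and the default namespace are assumptions here):

    kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup db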

    On the other hand, api and web both need to be accessed over the public internet. Currently, these services are using service type LoadBalancer which tells AKS to provision an Azure Load Balancer with a public IP address. No one is going to remember the IP addresses, so we need to make the app more accessible by adding a custom domain name and securing it with a TLS certificate.

    Here's what we're going to need:

    • Custom domain name for our app
    • TLS certificate for the custom domain name
    • Routing rule to ensure requests with /api/ in the URL are routed to the backend REST API
    • Routing rule to ensure requests without /api/ in the URL are routed to the web UI

    Just like last week, we will use the Web Application Routing add-on for AKS. But this time, we'll integrate it with Azure DNS and Azure Key Vault to satisfy all of our requirements above.

    info

    At the time of this writing the add-on is still in Public Preview

    Generate TLS certificate and store in Azure Key Vault

    We deployed an Azure Key Vault yesterday to store secrets. We'll use it again to store a TLS certificate too.

    Let's create and export a self-signed certificate for the custom domain.

    DNS_NAME=eshoponweb$RANDOM.com
    openssl req -new -x509 -nodes -out web-tls.crt -keyout web-tls.key -subj "/CN=${DNS_NAME}" -addext "subjectAltName=DNS:${DNS_NAME}"
    openssl pkcs12 -export -in web-tls.crt -inkey web-tls.key -out web-tls.pfx -password pass:
    info

    For learning purposes we'll use a self-signed certificate and a fake custom domain name.

    To browse to the site using the fake domain, we'll mimic a DNS lookup by adding an entry to your hosts file which maps the public IP address assigned to the ingress controller to the custom domain.

    In a production scenario, you will need to have a real domain delegated to Azure DNS and a valid TLS certificate for the domain.

    Grab your Azure Key Vault name and set the value in a variable for later use.

    RESOURCE_GROUP=cnny-week3

    AKV_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.KeyVault/vaults \
    --query "[0].name" -o tsv)

    Grant yourself permissions to get, list, and import certificates.

    MY_USER_NAME=$(az account show --query user.name -o tsv)
    MY_USER_OBJECT_ID=$(az ad user show --id $MY_USER_NAME --query id -o tsv)

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MY_USER_OBJECT_ID \
    --certificate-permissions get list import

    Upload the TLS certificate to Azure Key Vault and grab its certificate URI.

    WEB_TLS_CERT_ID=$(az keyvault certificate import \
    --vault-name $AKV_NAME \
    --name web-tls \
    --file web-tls.pfx \
    --query id \
    --output tsv)

    Implement custom DNS with Azure DNS

    Create a custom domain for our application and grab its Azure resource id.

    DNS_ZONE_ID=$(az network dns zone create \
    --name $DNS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query id \
    --output tsv)

    Enable Web Application Routing add-on for AKS

    As we enable the Web Application Routing add-on, we'll also pass in the Azure DNS Zone resource id which triggers the installation of the external-dns controller in your Kubernetes cluster. This controller will be able to write Azure DNS zone entries on your behalf as you deploy Ingress manifests.

    AKS_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerService/managedClusters \
    --query "[0].name" -o tsv)

    az aks enable-addons \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --addons web_application_routing \
    --dns-zone-resource-id=$DNS_ZONE_ID \
    --enable-secret-rotation

    The add-on will also deploy a new Azure Managed Identity which is used by the external-dns controller when writing Azure DNS zone entries. Currently, it does not have permission to do that, so let's grant it permission.

    # This is where resources are automatically deployed by AKS
    NODE_RESOURCE_GROUP=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query nodeResourceGroup -o tsv)

    # This is the managed identity created by the Web Application Routing add-on
    MANAGED_IDENTITY_OBJECT_ID=$(az resource show \
    --name webapprouting-${AKS_NAME} \
    --resource-group $NODE_RESOURCE_GROUP \
    --resource-type Microsoft.ManagedIdentity/userAssignedIdentities \
    --query properties.principalId \
    --output tsv)

    # Grant the managed identity permissions to write DNS entries
    az role assignment create \
    --role "DNS Zone Contributor" \
    --assignee $MANAGED_IDENTITY_OBJECT_ID \
    --scope $DNS_ZONE_ID

    The Azure Managed Identity will also be used to retrieve and rotate TLS certificates from Azure Key Vault. So we'll need to grant it permission for that too.

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MANAGED_IDENTITY_OBJECT_ID \
    --secret-permissions get \
    --certificate-permissions get

    Implement Ingress for the web application

    Before we create a new Ingress manifest, let's update the existing services to use ClusterIP instead of LoadBalancer. With an Ingress in place, there is no reason why we need the Service resources to be accessible from outside the cluster. The new Ingress will be the only entrypoint for external users.

    We can use the kubectl patch command to update the services.

    kubectl patch service api -p '{"spec": {"type": "ClusterIP"}}'
    kubectl patch service web -p '{"spec": {"type": "ClusterIP"}}'

    Deploy a new Ingress to place in front of the web Service. Notice there is a special annotations entry for kubernetes.azure.com/tls-cert-keyvault-uri which points back to our self-signed certificate that was uploaded to Azure Key Vault.

    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.azure.com/tls-cert-keyvault-uri: ${WEB_TLS_CERT_ID}
      name: web
    spec:
      ingressClassName: webapprouting.kubernetes.azure.com
      rules:
      - host: ${DNS_NAME}
        http:
          paths:
          - backend:
              service:
                name: web
                port:
                  number: 80
            path: /
            pathType: Prefix
          - backend:
              service:
                name: api
                port:
                  number: 80
            path: /api
            pathType: Prefix
      tls:
      - hosts:
        - ${DNS_NAME}
        secretName: web-tls
    EOF

    In our manifest above, we've also configured the Ingress to route traffic to either the web or api service based on the URL path requested. If the request URL includes /api/, then it will send traffic to the api backend service. Otherwise, it will send traffic to the web service.

    Within a few minutes, the external-dns controller will add an A record to Azure DNS which points to the Ingress resource's public IP. With the custom domain in place, we can simply browse using this domain name.
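
    If you're curious, you can also peek at the record set the controller created (purely optional; this assumes the DNS zone and resource group variables from earlier):

    az network dns record-set a list \
      --zone-name $DNS_NAME \
      --resource-group $RESOURCE_GROUP \
      --output table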

    info

    As mentioned above, since this is not a real domain name, we need to modify our host file to make it seem like our custom domain is resolving to the Ingress' public IP address.

    To get the ingress public IP, run the following:

    # Get the IP
    kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

    # Get the hostname
    kubectl get ingress web -o jsonpath="{.spec.tls[0].hosts[0]}"

    Next, open your hosts file and add an entry using the format <YOUR_PUBLIC_IP> <YOUR_CUSTOM_DOMAIN>. Below is an example of what it should look like.

    20.237.116.224 eshoponweb11265.com

    See this doc for more info on how to do this.

    When browsing to the website, you may be presented with a warning about the connection not being private. This is due to the fact that we are using a self-signed certificate. This is expected, so go ahead and proceed anyway to load up the page.

    Why is the Admin page broken?

    If you log in using the admin@microsoft.com account and browse to the Admin page, you'll notice no products are loaded on the page.

    This is because the admin page is built using Blazor and compiled as a WebAssembly application that runs in your browser. When the application was compiled, it packed the appsettings.Development.json file as an embedded resource. This file contains the base URL for the public API and it currently points to https://localhost:5099. Now that we have a domain name, we can update the base URL and point it to our custom domain.

    From the root of the eShopOnWeb repo, update the configuration file using a sed command.

    sed -i -e "s/localhost:5099/${DNS_NAME}/g" ./src/BlazorAdmin/wwwroot/appsettings.Development.json

    Rebuild and push the container to Azure Container Registry.

    # Grab the name of your Azure Container Registry
    ACR_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerRegistry/registries \
    --query "[0].name" -o tsv)

    # Invoke a build and publish job
    az acr build \
    --registry $ACR_NAME \
    --image $ACR_NAME.azurecr.io/web:v0.1.0 \
    --file ./src/Web/Dockerfile .

    Once the container build has completed, we can issue a kubectl patch command to quickly update the web deployment to test our change.

    kubectl patch deployment web -p "$(cat <<EOF
    {
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "name": "web",
                "image": "${ACR_NAME}.azurecr.io/web:v0.1.0"
              }
            ]
          }
        }
      }
    }
    EOF
    )"

    If all went well, you will be able to browse the admin page again and confirm product data is being loaded 🥳

    Conclusion

    The Web Application Routing add-on for AKS aims to streamline the process of exposing your application to the public using the open-source NGINX Ingress Controller. With the add-on being managed by Azure, it natively integrates with other Azure services like Azure DNS and eliminates the need to manually create DNS entries. It can also integrate with Azure Key Vault to automatically pull in TLS certificates and rotate them as needed to further reduce operational overhead.

    We are one step closer to production and in the upcoming posts we'll further operationalize and secure our deployment, so stay tuned!

    In the meantime, check out the resources listed below for further reading.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 6 min read
    Josh Duffney

    Welcome to Day 5 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about debugging and instrumenting our application. Today we'll explore the topic of container image signing and secure supply chain.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Introduction
    • Prerequisites
    • Create a digital signing certificate
    • Generate an Azure Container Registry Token
    • Set up Notation
    • Install the Notation Azure Key Vault Plugin
    • Add the signing Certificate to Notation
    • Sign Container Images
    • Conclusion

    Introduction

    The secure supply chain is a crucial aspect of software development, delivery, and deployment, and digital signing plays a critical role in this process.

    By using digital signatures to verify the authenticity and integrity of container images, organizations can improve the security of your software supply chain and reduce the risk of security breaches and data compromise.

    In this article, you'll learn how to use Notary, an open-source project hosted by the Cloud Native Computing Foundation (CNCF), to digitally sign container images stored in Azure Container Registry.

    Prerequisites

    To follow along, you'll need an instance of Azure Container Registry and an Azure Key Vault.

    Create a digital signing certificate

    A digital signing certificate is a certificate used to digitally sign and verify the authenticity and integrity of digital artifacts such as documents, software, and, of course, container images.

    Before you can implement digital signatures, you must first create a digital signing certificate.

    Run the following commands to generate the certificate:

    1. Create the policy file

      cat <<EOF > ./my_policy.json
      {
        "issuerParameters": {
          "certificateTransparency": null,
          "name": "Self"
        },
        "x509CertificateProperties": {
          "ekus": [
            "1.3.6.1.5.5.7.3.3"
          ],
          "key_usage": [
            "digitalSignature"
          ],
          "subject": "CN=${keySubjectName}",
          "validityInMonths": 12
        }
      }
      EOF

      The ekus and key usage of this certificate policy dictate that the certificate can only be used for digital signatures.

    2. Create the certificate in Azure Key Vault

      az keyvault certificate create --name $keyName --vault-name $keyVaultName --policy @my_policy.json

      Replace $keyName and $keyVaultName with your desired certificate name and Azure Key Vault instance name.

    Generate an Azure Container Registry token

    Azure Container Registry tokens are used to grant access to the contents of the registry. Tokens can be used for a variety of things such as pulling images, pushing images, or managing the registry.

    As part of the container image signing workflow, you'll need a token to authenticate the Notation CLI with your Azure Container Registry.

    Run the following command to generate an ACR token:

    az acr token create \
    --name $tokenName \
    --registry $registry \
    --scope-map _repositories_admin \
    --query 'credentials.passwords[0].value' \
    --only-show-errors \
    --output tsv

    Replace $tokenName with your name for the ACR token and $registry with the name of your ACR instance.
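
    Since that password is needed again later when signing, you might prefer to capture it into a variable at creation time (a sketch of the same command; $tokenName and $registry as above, with the password landing in $tokenPassword):

    tokenPassword=$(az acr token create \
      --name $tokenName \
      --registry $registry \
      --scope-map _repositories_admin \
      --query 'credentials.passwords[0].value' \
      --only-show-errors \
      --output tsv)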

    Set up Notation

    Notation is the command-line interface for the CNCF Notary project. You'll use it to digitally sign the api and web container images for the eShopOnWeb application.

    Run the following commands to download and install the Notation CLI:

    1. Open a terminal or command prompt window

    2. Download the Notary notation release

      curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      If you're not using Linux, you can find the releases here.

    3. Extract the contents of the notation.tar.gz

      tar xvzf notation.tar.gz > /dev/null 2>&1
    4. Copy the notation binary to the $HOME/bin directory

      cp ./notation $HOME/bin
    5. Add the $HOME/bin directory to the PATH environment variable

      export PATH="$HOME/bin:$PATH"
    6. Remove the downloaded files

      rm notation.tar.gz LICENSE
    7. Check the notation version

      notation --version

    Install the Notation Azure Key Vault plugin

    By design, the Notation CLI supports plugins that extend its digital signing capabilities to remote registries. In order to sign your container images stored in Azure Container Registry, you'll need to install the Azure Key Vault plugin for Notation.

    Run the following commands to install the azure-kv plugin:

    1. Download the plugin

      curl -Lo notation-azure-kv.tar.gz \
      https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      Non-Linux releases can be found here.

    2. Extract to the plugin directory & delete download files

      mkdir -p ~/.config/notation/plugins/azure-kv
      tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv > /dev/null 2>&1

      rm -rf notation-azure-kv.tar.gz
    3. Verify the plugin was installed

      notation plugin ls

    Add the signing certificate to Notation

    Now that you have Notation and the Azure Key Vault plugin installed, add the certificate's keyId created above to Notation.

    1. Get the Certificate Key ID from Azure Key Vault

      az keyvault certificate show \
      --vault-name $keyVaultName \
      --name $keyName \
      --query "kid" --only-show-errors --output tsv

      Replace $keyVaultName and $keyName with the appropriate information.

    2. Add the Key ID to KMS using Notation

      notation key add --plugin azure-kv --id $keyID $keyName
    3. Check the key list

      notation key ls

    Sign Container Images

    At this point, all that's left is to sign the container images.

    Run the notation sign command to sign the api and web container images:

    notation sign $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword

    notation sign $registry.azurecr.io/api:$tag \
    --username $tokenName \
    --password $tokenPassword

    Replace $registry, $tag, $tokenName, and $tokenPassword with the appropriate values. To improve security, use a SHA hash for the tag.
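
    One way to follow that advice and sign by digest rather than a mutable tag is to look the digest up first (a sketch reusing the variables above; the $digest variable is introduced here for illustration):

    digest=$(az acr repository show \
      --name $registry \
      --image web:$tag \
      --query digest \
      --output tsv)

    notation sign $registry.azurecr.io/web@$digest \
      --username $tokenName \
      --password $tokenPassword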

    NOTE: If you didn't take note of the token password, you can rerun the az acr token create command to generate a new password.

    Conclusion

    Digital signing plays a critical role in ensuring the security of software supply chains.

    By signing software components, organizations can verify the authenticity and integrity of software, helping to prevent unauthorized modifications, tampering, and malware.

    And if you want to take digital signing to a whole new level by using them to prevent the deployment of unsigned container images, check out the Ratify project on GitHub!

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 4 min read
    Cory Skimming
    Devanshi Joshi
    Steven Murawski
    Nitya Narasimhan

    Welcome to the Kick-off Post for #30DaysOfCloudNative - one of the core initiatives within #CloudNativeNewYear! Over the next four weeks, join us as we take you from fundamentals to functional usage of Cloud-native technologies, one blog post at a time! Read on to learn a little bit about this initiative and what you can expect to learn from this journey!

    What We'll Cover


    Cloud-native New Year

    Welcome to Week 01 of 🥳 #CloudNativeNewYear ! Today, we kick off a full month of content and activities to skill you up on all things Cloud-native on Azure with content, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfCloudNative in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

    What are 3 things you can do today to jumpstart your learning journey?


    #30DaysOfCloudNative

    #30DaysOfCloudNative is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Cloud-native On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative weekly-themed roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON CLOUD-NATIVE FUNDAMENTALS

    Here's a sneak peek at the week 1 schedule. We'll start with a broad review of cloud-native fundamentals and walkthrough the core concepts of microservices, containers and Kubernetes.

    • Jan 23: Learn Core Concepts for Cloud-native
    • Jan 24: Container 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud Native Options

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Cloud-native Fundamentals post on January 23rd!



    · 14 min read
    Steven Murawski

    Welcome to Day 1 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Last week we talked about Kubernetes Fundamentals. Today we'll explore getting an existing application running in Kubernetes with a full pipeline in GitHub Actions.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Our Application
    • Adding Some Infrastructure as Code
    • Building and Publishing a Container Image
    • Deploying to Kubernetes
    • Summary
    • Resources

    Our Application

    This week we'll be taking an existing application - something similar to a typical line of business application - and setting it up to run in Kubernetes. Over the course of the week, we'll address different concerns. Today we'll focus on updating our CI/CD process to handle standing up (or validating that we have) an Azure Kubernetes Service (AKS) environment, building and publishing container images for our web site and API server, and getting those services running in Kubernetes.

    The application we'll be starting with is eShopOnWeb. This application has a web site and API which are backed by a SQL Server instance. It's built in .NET 7, so it's cross-platform.

    info

    For the enterprising among you, you may notice that there are a number of different eShopOn* variants on GitHub, including eShopOnContainers. We aren't using that example as it's more of an end state than a starting place. Afterwards, feel free to check out that example to see what this solution could look like as a series of microservices.

    Adding Some Infrastructure as Code

    Just like last week, we need to stand up an AKS environment. This week, however, rather than running commands in our own shell, we'll set up GitHub Actions to do that for us.

There is a LOT of plumbing in this section, but once it's set up, it'll make our lives a lot easier. This section ensures that we have an environment to deploy our application into, configured the way we want. We can easily extend this to accommodate multiple environments or add additional microservices with minimal new effort.

    Federated Identity

Setting up a federated identity will give us a more secure and auditable way of accessing Azure from GitHub Actions. For more about setting up a federated identity, Microsoft Learn has the details on connecting GitHub Actions to Azure.

Here, we'll just walk through the setup of the identity and configure GitHub to use that identity to deploy our AKS environment and interact with our Azure Container Registry.

    The examples will use PowerShell, but a Bash version of the setup commands is available in the week3/day1 branch.

    Prerequisites

    To follow along, you'll need:

    • a GitHub account
    • an Azure Subscription
    • the Azure CLI
    • and the Git CLI.

    You'll need to fork the source repository under your GitHub user or organization where you can manage secrets and GitHub Actions.

    It would be helpful to have the GitHub CLI, but it's not required.

    Set Up Some Defaults

    You will need to update one or more of the variables (your user or organization, what branch you want to work off of, and possibly the Azure AD application name if there is a conflict).

    # Replace the gitHubOrganizationName value
    # with the user or organization you forked
    # the repository under.

    $githubOrganizationName = 'Azure-Samples'
    $githubRepositoryName = 'eShopOnAKS'
    $branchName = 'week3/day1'
    $applicationName = 'cnny-week3-day1'

    Create an Azure AD Application

    Next, we need to create an Azure AD application.

    # Create an Azure AD application
    $aksDeploymentApplication = New-AzADApplication -DisplayName $applicationName

    Set Up Federation for that Azure AD Application

    And configure that application to allow federated credential requests from our GitHub repository for a particular branch.

    # Create a federated identity credential for the application
    New-AzADAppFederatedCredential `
    -Name $applicationName `
    -ApplicationObjectId $aksDeploymentApplication.Id `
    -Issuer 'https://token.actions.githubusercontent.com' `
    -Audience 'api://AzureADTokenExchange' `
    -Subject "repo:$($githubOrganizationName)/$($githubRepositoryName):ref:refs/heads/$branchName"

    Create a Service Principal for the Azure AD Application

    Once the application has been created, we need a service principal tied to that application. The service principal can be granted rights in Azure.

    # Create a service principal for the application
    New-AzADServicePrincipal -AppId $($aksDeploymentApplication.AppId)

    Give that Service Principal Rights to Azure Resources

    Because our Bicep deployment exists at the subscription level and we are creating role assignments, we need to give it Owner rights. If we changed the scope of the deployment to just a resource group, we could apply more scoped permissions.

    $azureContext = Get-AzContext
    New-AzRoleAssignment `
    -ApplicationId $($aksDeploymentApplication.AppId) `
    -RoleDefinitionName Owner `
    -Scope $azureContext.Subscription.Id

    Add Secrets to GitHub Repository

    If you have the GitHub CLI, you can use that right from your shell to set the secrets needed.

    gh secret set AZURE_CLIENT_ID --body $aksDeploymentApplication.AppId
    gh secret set AZURE_TENANT_ID --body $azureContext.Tenant.Id
    gh secret set AZURE_SUBSCRIPTION_ID --body $azureContext.Subscription.Id

    Otherwise, you can create them through the web interface like I did in the Learn Live event below.

    info

    It may look like the whole video will play, but it'll stop after configuring the secrets in GitHub (after about 9 minutes)

    The video shows creating the Azure AD application, service principals, and configuring the federated identity in Azure AD and GitHub.
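Whichever route you take, you can quickly confirm that the three secrets exist without revealing their values. A minimal check, assuming you have the GitHub CLI installed and are in your forked repository's directory:

# Secret names and update timestamps are listed; values are never shown
gh secret list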

    Creating a Bicep Deployment

Reusable Workflows

We'll create our Bicep deployment as a reusable workflow. What are they? The previous link has the documentation, or the video below has my colleague Brandon Martinez and me talking about them.

    This workflow is basically the same deployment we did in last week's series, just in GitHub Actions.

    Start by creating a file called deploy_aks.yml in the .github/workflows directory with the below contents.

name: deploy

on:
  workflow_call:
    inputs:
      resourceGroupName:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true
    outputs:
      containerRegistryName:
        description: Container Registry Name
        value: ${{ jobs.deploy.outputs.containerRegistryName }}
      containerRegistryUrl:
        description: Container Registry Login Url
        value: ${{ jobs.deploy.outputs.containerRegistryUrl }}
      resourceGroupName:
        description: Resource Group Name
        value: ${{ jobs.deploy.outputs.resourceGroupName }}
      aksName:
        description: Azure Kubernetes Service Cluster Name
        value: ${{ jobs.deploy.outputs.aksName }}

permissions:
  id-token: write
  contents: read

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/arm-deploy@v1
        name: Run preflight validation
        with:
          deploymentName: ${{ github.run_number }}
          scope: subscription
          region: eastus
          template: ./deploy/main.bicep
          parameters: >
            resourceGroup=${{ inputs.resourceGroupName }}
          deploymentMode: Validate

  deploy:
    needs: validate
    runs-on: ubuntu-latest
    outputs:
      containerRegistryName: ${{ steps.deploy.outputs.acr_name }}
      containerRegistryUrl: ${{ steps.deploy.outputs.acr_login_server_url }}
      resourceGroupName: ${{ steps.deploy.outputs.resource_group_name }}
      aksName: ${{ steps.deploy.outputs.aks_name }}
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/arm-deploy@v1
        id: deploy
        name: Deploy Bicep file
        with:
          failOnStdErr: false
          deploymentName: ${{ github.run_number }}
          scope: subscription
          region: eastus
          template: ./deploy/main.bicep
          parameters: >
            resourceGroup=${{ inputs.resourceGroupName }}

    Adding the Bicep Deployment

    Once we have the Bicep deployment workflow, we can add it to the primary build definition in .github/workflows/dotnetcore.yml

    Permissions

    First, we need to add a permissions block to let the workflow request our Azure AD token. This can go towards the top of the YAML file (I started it on line 5).

permissions:
  id-token: write
  contents: read

    Deploy AKS Job

    Next, we'll add a reference to our reusable workflow. This will go after the build job.

  deploy_aks:
    needs: [build]
    uses: ./.github/workflows/deploy_aks.yml
    with:
      resourceGroupName: 'cnny-week3'
    secrets:
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Building and Publishing a Container Image

    Now that we have our target environment in place and an Azure Container Registry, we can build and publish our container images.

    Add a Reusable Workflow

    First, we'll create a new file for our reusable workflow at .github/workflows/publish_container_image.yml.

    We'll start the file with a name, the parameters it needs to run, and the permissions requirements for the federated identity request.

name: Publish Container Images

on:
  workflow_call:
    inputs:
      containerRegistryName:
        required: true
        type: string
      containerRegistryUrl:
        required: true
        type: string
      githubSha:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write
  contents: read

    Build the Container Images

Our next step is to build the two container images we'll need for the application, the website and the API. We'll build the container images on our build worker and tag them with the git SHA, so there'll be a direct tie between the point in time in our codebase and the container images that represent it.

jobs:
  publish_container_image:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - name: docker build
        run: |
          docker build . -f src/Web/Dockerfile -t ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          docker build . -f src/PublicApi/Dockerfile -t ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Scan the Container Images

    Before we publish those container images, we'll scan them for vulnerabilities and best practice violations. We can add these two steps (one scan for each image).

      - name: scan web container image
        uses: Azure/container-scan@v0
        with:
          image-name: ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
      - name: scan api container image
        uses: Azure/container-scan@v0
        with:
          image-name: ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

The provided container images have a few findings that the scan will flag. We can create an allowed list at .github/containerscan/allowedlist.yaml to define vulnerabilities or best practice violations that we explicitly allow, so they don't fail our build.

general:
  vulnerabilities:
    - CVE-2022-29458
    - CVE-2022-3715
    - CVE-2022-1304
    - CVE-2021-33560
    - CVE-2020-16156
    - CVE-2019-8457
    - CVE-2018-8292
  bestPracticeViolations:
    - CIS-DI-0001
    - CIS-DI-0005
    - CIS-DI-0006
    - CIS-DI-0008

    Publish the Container Images

    Finally, we'll log in to Azure, then log in to our Azure Container Registry, and push our images.

      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: acr login
        run: az acr login --name ${{ inputs.containerRegistryName }}
      - name: docker push
        run: |
          docker push ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          docker push ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Update the Build With the Image Build and Publish

Now that we have our reusable workflow to create and publish our container images, we can include it in our primary build definition at .github/workflows/dotnetcore.yml.

  publish_container_image:
    needs: [deploy_aks]
    uses: ./.github/workflows/publish_container_image.yml
    with:
      containerRegistryName: ${{ needs.deploy_aks.outputs.containerRegistryName }}
      containerRegistryUrl: ${{ needs.deploy_aks.outputs.containerRegistryUrl }}
      githubSha: ${{ github.sha }}
    secrets:
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Deploying to Kubernetes

    Finally, we've gotten enough set up that a commit to the target branch will:

    • build and test our application code
    • set up (or validate) our AKS and ACR environment
    • and create, scan, and publish our container images to ACR

    Our last step will be to deploy our application to Kubernetes. We'll use the basic building blocks we worked with last week, deployments and services.

    Starting the Reusable Workflow to Deploy to AKS

    We'll start our workflow with our parameters that we need, as well as the permissions to access the token to log in to Azure.

    We'll check out our code, then log in to Azure, and use the az CLI to get credentials for our AKS cluster.

name: deploy_to_aks

on:
  workflow_call:
    inputs:
      aksName:
        required: true
        type: string
      resourceGroupName:
        required: true
        type: string
      containerRegistryUrl:
        required: true
        type: string
      githubSha:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: Get AKS Credentials
        run: |
          az aks get-credentials --resource-group ${{ inputs.resourceGroupName }} --name ${{ inputs.aksName }}

    Edit the Deployment For Our Current Image Tag

Let's add the Kubernetes manifests to our repo. This post is long enough already, so you can find the contents of the manifests folder in the source repo under the week3/day1 branch.

    tip

    If you only forked the main branch of the source repo, you can easily get the updated manifests by using the following git commands:

    git remote add upstream https://github.com/Azure-Samples/eShopOnAks
    git fetch upstream week3/day1
    git checkout upstream/week3/day1 manifests

    This will make the week3/day1 branch available locally and then we can update the manifests directory to match the state of that branch.

The deployments and the service definitions should be familiar from last week's content (but not the same). This week, however, there's a new file in the manifests - ./manifests/kustomization.yaml

This file lets us dynamically edit our Kubernetes manifests, and support for it is baked right into the kubectl command.
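If you'd like to see exactly what kubectl will apply before it touches the cluster, you can render the kustomization locally. A minimal sketch, assuming you're at the repository root:

# Print the transformed manifests without applying them
kubectl kustomize ./manifests

# Or preview an apply against the cluster without persisting changes
kubectl apply -k ./manifests --dry-run=server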

    Kustomize Definition

Kustomize lets us point at specific resource manifests and declare which parts of them to replace. We've put some placeholders in our file as well, so we can substitute them on each run of our CI/CD system.

    In ./manifests/kustomization.yaml you will see:

resources:
  - deployment-api.yaml
  - deployment-web.yaml

# Change the image name and version
images:
  - name: notavalidregistry.azurecr.io/api:v0.1.0
    newName: <YOUR_ACR_SERVER>/api
    newTag: <YOUR_IMAGE_TAG>
  - name: notavalidregistry.azurecr.io/web:v0.1.0
    newName: <YOUR_ACR_SERVER>/web
    newTag: <YOUR_IMAGE_TAG>

    Replacing Values in our Build

    Now, we encounter a little problem - our deployment files need to know the tag and ACR server. We can do a bit of sed magic to edit the file on the fly.

    In .github/workflows/deploy_to_aks.yml, we'll add:

      - name: replace_placeholders_with_current_run
        run: |
          sed -i "s/<YOUR_ACR_SERVER>/${{ inputs.containerRegistryUrl }}/g" ./manifests/kustomization.yaml
          sed -i "s/<YOUR_IMAGE_TAG>/${{ inputs.githubSha }}/g" ./manifests/kustomization.yaml

    Deploying the Manifests

With our manifests in place and our kustomization.yaml file (plus the commands to update it at runtime) ready to go, we can deploy our manifests.

First, we'll deploy our database (deployment and service). Next, we'll use the -k parameter on kubectl to tell it to look for a kustomize configuration, transform the requested manifests, and apply those. Finally, we apply the service definitions for the web and API deployments.

      - run: |
          kubectl apply -f ./manifests/deployment-db.yaml \
            -f ./manifests/service-db.yaml
          kubectl apply -k ./manifests
          kubectl apply -f ./manifests/service-api.yaml \
            -f ./manifests/service-web.yaml
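Once the workflow has run, a few read-only kubectl commands make a quick sanity check that everything landed. A minimal sketch - the exact resource names come from the manifests in the week3/day1 branch:

# See what was deployed and watch the pods come up
kubectl get deployments,services
kubectl get pods --watch

# If a pod is stuck, its events usually explain why (image pull, scheduling, etc.)
kubectl describe pod <pod-name>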

    Summary

    We've covered a lot of ground in today's post. We set up federated credentials with GitHub. Then we added reusable workflows to deploy an AKS environment and build/scan/publish our container images, and then to deploy them into our AKS environment.

    This sets us up to start making changes to our application and Kubernetes configuration and have those changes automatically validated and deployed by our CI/CD system. Tomorrow, we'll look at updating our application environment with runtime configuration, persistent storage, and more.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.



    · 9 min read
    Steven Murawski

    Welcome to Day 4 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we exposed the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS. Today we'll explore the topic of debugging and instrumentation.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Debugging
    • Bridge To Kubernetes
    • Instrumentation
    • Resources: For self-study!

    Debugging

    Debugging applications in a Kubernetes cluster can be challenging for several reasons:

    • Complexity: Kubernetes is a complex system with many moving parts, including pods, nodes, services, and config maps, all of which can interact in unexpected ways and cause issues.
    • Distributed Environment: Applications running in a Kubernetes cluster are often distributed across multiple nodes, which makes it harder to determine the root cause of an issue.
    • Logging and Monitoring: Debugging an application in a Kubernetes cluster requires access to logs and performance metrics, which can be difficult to obtain in a large and dynamic environment.
    • Resource Management: Kubernetes manages resources such as CPU and memory, which can impact the performance and behavior of applications. Debugging resource-related issues requires a deep understanding of the Kubernetes resource model and the underlying infrastructure.
    • Dynamic Nature: Kubernetes is designed to be dynamic, with the ability to add and remove resources as needed. This dynamic nature can make it difficult to reproduce issues and debug problems.

    However, there are many tools and practices that can help make debugging applications in a Kubernetes cluster easier, such as using centralized logging, monitoring, and tracing solutions, and following best practices for managing resources and deployment configurations.
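Before reaching for anything fancier, the built-in kubectl commands cover a surprising amount of day-to-day debugging. A short sketch of the usual suspects - pod names here are placeholders:

# Inspect pod status and recent events
kubectl get pods -o wide
kubectl describe pod <pod-name>

# Read logs, including from the previous (crashed) container instance
kubectl logs <pod-name>
kubectl logs <pod-name> --previous

# Check resource pressure (needs a metrics server, which AKS provides by default)
kubectl top pods
kubectl top nodes

# Open a shell inside a running container
kubectl exec -it <pod-name> -- sh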

    There's also another great tool in our toolbox - Bridge to Kubernetes.

    Bridge to Kubernetes

    Bridge to Kubernetes is a great tool for microservice development and debugging applications without having to locally replicate all the required microservices.

    Bridge to Kubernetes works with Visual Studio or Visual Studio Code.

    We'll walk through using it with Visual Studio Code.

    Connecting Bridge to Kubernetes to Our Cluster

    Ensure your AKS cluster is the default for kubectl

    If you've recently spun up a new AKS cluster or you have been working with a different cluster, you may need to change what cluster credentials you have configured.

    If it's a new cluster, we can use:

    RESOURCE_GROUP=<YOUR RESOURCE GROUP NAME>
    CLUSTER_NAME=<YOUR AKS CLUSTER NAME>
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
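You can also confirm which cluster kubectl is currently pointed at before configuring Bridge to Kubernetes:

# Show the active context and the others available
kubectl config current-context
kubectl config get-contexts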

    Open the command palette

    Open the command palette and find Bridge to Kubernetes: Configure. You may need to start typing the name to get it to show up.

    The command palette for Visual Studio Code is open and the first item is Bridge to Kubernetes: Configure

    Pick the service you want to debug

    Bridge to Kubernetes will redirect a service for you. Pick the service you want to redirect, in this case we'll pick web.

    Selecting the `web` service to redirect in Visual Studio Code

    Identify the port your application runs on

    Next, we'll be prompted to identify what port our application will run on locally. For this application it'll be 5001, but that's just specific to this application (and the default for ASP.NET 7, I believe).

    Setting port 5001 as the port to redirect to the `web` Kubernetes service in Visual Studio Code

    Pick a debug configuration to extend

Bridge to Kubernetes has a couple of ways to run - it can inject its setup and teardown into your existing debug configurations. We'll pick .NET Core Launch (web).

    Telling Bridge to Kubernetes to use the .NET Core Launch (web) debug configuration in Visual Studio Code

    Forward Traffic for All Requests

    The last prompt you'll get in the configuration is about how you want Bridge to Kubernetes to handle re-routing traffic. The default is that all requests into the service will get your local version.

    You can also redirect specific traffic. Bridge to Kubernetes will set up a subdomain and route specific traffic to your local service, while allowing other traffic to the deployed service.

    Allowing the launch of Endpoint Manager on Windows

    Using Bridge to Kubernetes to Debug Our Service

    Now that we've configured Bridge to Kubernetes, we see that tasks and a new launch configuration have been added.

    Added to .vscode/tasks.json:

    {
      "label": "bridge-to-kubernetes.resource",
      "type": "bridge-to-kubernetes.resource",
      "resource": "web",
      "resourceType": "service",
      "ports": [
        5001
      ],
      "targetCluster": "aks1",
      "targetNamespace": "default",
      "useKubernetesServiceEnvironmentVariables": false
    },
    {
      "label": "bridge-to-kubernetes.compound",
      "dependsOn": [
        "bridge-to-kubernetes.resource",
        "build"
      ],
      "dependsOrder": "sequence"
    }

    And added to .vscode/launch.json:

{
  "name": ".NET Core Launch (web) with Kubernetes",
  "type": "coreclr",
  "request": "launch",
  "preLaunchTask": "bridge-to-kubernetes.compound",
  "program": "${workspaceFolder}/src/Web/bin/Debug/net7.0/Web.dll",
  "args": [],
  "cwd": "${workspaceFolder}/src/Web",
  "stopAtEntry": false,
  "env": {
    "ASPNETCORE_ENVIRONMENT": "Development",
    "ASPNETCORE_URLS": "http://+:5001"
  },
  "sourceFileMap": {
    "/Views": "${workspaceFolder}/Views"
  }
}

    Launch the debug configuration

    We can start the process with the .NET Core Launch (web) with Kubernetes launch configuration in the Debug pane in Visual Studio Code.

    Launch the `.NET Core Launch (web) with Kubernetes` from the Debug pane in Visual Studio Code

    Enable the Endpoint Manager

    Part of this process includes a local service to help manage the traffic routing and your hosts file. This will require admin or sudo privileges. On Windows, you'll get a prompt like:

    Prompt to launch the endpoint manager.

    Access your Kubernetes cluster "locally"

    Bridge to Kubernetes will set up a tunnel (thanks to port forwarding) to your local workstation and create local endpoints for the other Kubernetes hosted services in your cluster, as well as pretending to be a pod in that cluster (for the application you are debugging).

    Output from Bridge To Kubernetes setup task.

    After making the connection to your Kubernetes cluster, the launch configuration will continue. In this case, we'll make a debug build of the application and attach the debugger. (This process may cause the terminal in VS Code to scroll with build output. You can find the Bridge to Kubernetes output with the local IP addresses and ports in the Output pane for Bridge to Kubernetes.)

    You can set breakpoints, use your debug console, set watches, run tests against your local version of the service.

    Exploring the Running Application Environment

    One of the cool things that Bridge to Kubernetes does for our debugging experience is bring the environment configuration that our deployed pod would inherit. When we launch our app, it'll see configuration and secrets that we'd expect our pod to be running with.

To test this, we'll set a breakpoint in our application's startup to see what SQL Server is being used. We'll set a breakpoint at src/Infrastructure/Dependencies.cs on line 32.

    Then, we will start debugging the application with Bridge to Kubernetes. When it hits the breakpoint, we'll open the Debug pane and type configuration.GetConnectionString("CatalogConnection").

    When we run locally (not with Bridge to Kubernetes), we'd see:

    configuration.GetConnectionString("CatalogConnection")
    "Server=(localdb)\\mssqllocaldb;Integrated Security=true;Initial Catalog=Microsoft.eShopOnWeb.CatalogDb;"

But, with Bridge to Kubernetes we see something more like (yours will vary based on the password):

    configuration.GetConnectionString("CatalogConnection")
    "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=*****************;TrustServerCertificate=True;"

    Debugging our local application connected to Kubernetes.

    We can see that the database server configured is based on our db service and the password is pulled from our secret in Azure KeyVault (via AKS).

    This helps us run our local application just like it was actually in our cluster.

    Going Further

    Bridge to Kubernetes also supports more advanced scenarios and, as you need to start routing traffic around inside your cluster, may require you to modify your application to pass along a kubernetes-route-as header to help ensure that traffic for your debugging workloads is properly handled. The docs go into much greater detail about that.

    Instrumentation

    Now that we've figured out our debugging story, we'll need to ensure that we have the right context clues to find where we need to debug or to give us a better idea of how well our microservices are running.

    Logging and Tracing

    Logging and tracing become even more critical in Kubernetes, where your application could be running in a number of pods across different nodes. When you have an issue, in addition to the normal application data, you'll want to know what pod and what node had the issue, what the state of those resources were (were you resource constrained or were shared resources unavailable?), and if autoscaling is enabled, you'll want to know if a scale event has been triggered. There are a multitude of other concerns based on your application and the environment you maintain.

    Given these informational needs, it's crucial to revisit your existing logging and instrumentation. Most frameworks and languages have extensible logging, tracing, and instrumentation libraries that you can iteratively add information to, such as pod and node states, and ensuring that requests can be traced across your microservices. This will pay you back time and time again when you have to troubleshoot issues in your existing environment.

    Centralized Logging

    To enhance the troubleshooting process further, it's important to implement centralized logging to consolidate logs from all your microservices into a single location. This makes it easier to search and analyze logs when you're troubleshooting an issue.

    Automated Alerting

    Additionally, implementing automated alerting, such as sending notifications when specific conditions occur in the logs, can help you detect issues before they escalate.

    End to end Visibility

    End-to-end visibility is also essential in understanding the flow of requests and responses between microservices in a distributed system. With end-to-end visibility, you can quickly identify bottlenecks and slowdowns in the system, helping you to resolve issues more efficiently.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.



    · 7 min read
    Nitya Narasimhan

    Welcome to Week 4 of #CloudNativeNewYear!

    This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. We'll start with an exploration of Serverless Container Options - ranging from managed services to Azure Kubernetes Service (AKS) and Azure Container Apps (ACA), to options that allow more granular control!

    What We'll Cover

    • The Azure Compute Landscape
    • Serverless Compute on Azure
    • Comparing Container Options On Azure
    • Other Considerations
    • Exercise: Try this yourself!
    • Resources: For self-study!


    We started this series with an introduction to core concepts:

    • In Containers 101, we learned why containerization matters. Think portability, isolation, scalability, resource-efficiency and cost-effectiveness. But not all apps can be containerized.
    • In Kubernetes 101, we learned how orchestration works. Think systems to automate container deployment, scaling, and management. But using Kubernetes directly can be complex.
• In Exploring Cloud Native Options we asked the real questions: can we containerize - and should we? The first depends on app characteristics, the second on your requirements.

    For example:

    • Can we containerize? The answer might be no if your app has system or OS dependencies, requires access to low-level hardware, or maintains complex state across sessions.
• Should we containerize? The answer might be yes if your app is microservices-based, is stateless by default, requires portability, or is a legacy app that can benefit from container isolation.

    As with every technology adoption decision process, there are no clear yes/no answers - just tradeoffs that you need to evaluate based on your architecture and application requirements. In today's post, we try to look at this from two main perspectives:

    1. Should you go serverless? Think: managed services that let you focus on app, not infra.
    2. What Azure Compute should I use? Think: best fit for my architecture & technology choices.

    Azure Compute Landscape

Let's answer the second question first by exploring all available compute options on Azure. The illustrated decision-flow below is my favorite way to navigate the choices, with questions like:

    • Are you migrating an existing app or building a new one?
• Can your app be containerized?
    • Does it use a specific technology (Spring Boot, Red Hat Openshift)?
    • Do you need access to the Kubernetes API?
    • What characterizes the workload? (event-driven, web app, microservices etc.)

Read the docs to understand how your choices can be influenced by the hosting model (IaaS, PaaS, FaaS), supported features (Networking, DevOps, Scalability, Availability, Security), architectural styles (Microservices, Event-driven, High-Performance Compute, Task Automation, Web-Queue Worker), etc.

    Compute Choices

Now that we know all the available compute options, let's circle back to the first question: why go serverless? And what are my serverless compute options on Azure?

    Azure Serverless Compute

Serverless gets defined many ways, but from a compute perspective, we can focus on a few characteristics that are key to this decision:

    • managed services - focus on application, let cloud provider handle infrastructure.
    • pay for what you use - get cost-effective resource utilization, flexible pricing options.
    • autoscaling on demand - take advantage of built-in features like KEDA-compliant triggers.
    • use preferred languages - write code in Java, JS, C#, Python etc. (specifics based on service)
    • cloud-native architectures - can support event-driven solutions, APIs, Microservices, DevOps!

So what are some of the key options for serverless compute on Azure? The article dives into fully-managed, end-to-end serverless solutions with comprehensive support for DevOps, DevTools, AI/ML, Database, Storage, Monitoring and Analytics integrations. But we'll just focus on the 4 categories of applications when we look at compute!

    1. Serverless Containerized Microservices using Azure Container Apps. Code in your preferred language, exploit full Dapr support, scale easily with any KEDA-compliant trigger.
    2. Serverless Application Environments using Azure App Service. Suitable for hosting monolithic apps (vs. microservices) in a managed service, with built-in support for on-demand scaling.
    3. Serverless Kubernetes using Azure Kubernetes Service (AKS). Spin up pods inside container instances and deploy Kubernetes-based applications with built-in KEDA-compliant autoscaling.
    4. Serverless Functions using Azure Functions. Execute "code at the granularity of functions" in your preferred language, and scale on demand with event-driven compute.

We'll talk about these, and other compute comparisons, at the end of the article. But let's start with the core option you might choose if you want a managed serverless compute solution with built-in features for delivering containerized microservices at scale. Hello, Azure Container Apps!

    Azure Container Apps

    Azure Container Apps (ACA) became generally available in May 2022 - providing customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform. The figure below showcases the different types of applications that can be built with ACA. Note that it comes with built-in KEDA-compliant autoscaling triggers, and other auto-scale criteria that may be better-suited to the type of application you are building.

    About ACA

So far in the series, we've put the spotlight on Azure Kubernetes Service (AKS) - so you're probably asking yourself: How does ACA compare to AKS? We're glad you asked. Check out our Go Cloud-native with Azure Container Apps post from the #ServerlessSeptember series last year for a deeper-dive, or review the figure below for the main comparison points.

    The key takeaway is this. Azure Container Apps (ACA) also runs on Kubernetes but abstracts away its complexity in a managed service offering that lets you get productive quickly without requiring detailed knowledge of Kubernetes workings or APIs. However, if you want full access and control over the Kubernetes API then go with Azure Kubernetes Service (AKS) instead.

    Comparison

    Other Container Options

Azure Container Apps is the preferred Platform as a Service (PaaS) option for a fully-managed serverless solution on Azure that is purpose-built for cloud-native, microservices-based application workloads. But - there are other options that may be suitable for your specific needs, from a requirements and tradeoffs perspective. Let's review them quickly:

    1. Azure Functions is the serverless Functions-as-a-Service (FaaS) option, as opposed to ACA which supports a PaaS approach. It's optimized for running event-driven applications built at the granularity of ephemeral functions that can be deployed as code or containers.
    2. Azure App Service provides fully managed hosting for web applications that may be deployed using code or containers. It can be integrated with other services including Azure Container Apps and Azure Functions. It's optimized for deploying traditional web apps.
    3. Azure Kubernetes Service provides a fully managed Kubernetes option capable of running any Kubernetes workload, with direct access to the Kubernetes API.
    4. Azure Container Instances provides a single pod of Hyper-V isolated containers on demand, making them more of a low-level "building block" option compared to ACA.

    Based on the technology choices you made for application development you may also have more specialized options you want to consider. For instance:

1. Azure Spring Apps is ideal if you're running Spring Boot or Spring Cloud workloads on Azure.
    2. Azure Red Hat OpenShift is ideal for integrated Kubernetes-powered OpenShift on Azure.
    3. Azure Confidential Computing is ideal if you have data/code integrity and confidentiality needs.
    4. Kubernetes At The Edge is ideal for bare-metal options that extend compute to edge devices.

This is just the tip of the iceberg in your decision-making journey - but hopefully, it gave you a good sense of the options and criteria that influence your final choices. Let's wrap this up with a look at self-study resources for skilling up further.

    Exercise

    Want to get hands on learning related to these technologies?

    TAKE THE CLOUD SKILLS CHALLENGE

    Register today and level up your skills by completing free learning modules, while competing with your peers for a place on the leaderboards!

    Resources



    · 3 min read
    Cory Skimming

    It's the final week of #CloudNativeNewYear! This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. In today's post, we will introduce you to the basics of the open-source project Draft and how it can be used to easily create and deploy applications to Kubernetes.

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    What We'll Cover

    • What is Draft?
    • Draft basics
    • Demo: Developing to AKS with Draft
    • Resources


    What is Draft?

    Draft is an open-source tool that can be used to streamline the development and deployment of applications on Kubernetes clusters. It provides a simple and easy-to-use workflow for creating and deploying applications, making it easier for developers to focus on writing code and building features, rather than worrying about the underlying infrastructure. This is great for users who are just getting started with Kubernetes, or those who are just looking to simplify their experience.

    New to Kubernetes?

    Draft basics

    Draft streamlines Kubernetes development by taking a non-containerized application and generating the Dockerfiles, K8s manifests, Helm charts, and other artifacts associated with a containerized application. Draft can also create a GitHub Action workflow file to quickly build and deploy your application onto any Kubernetes cluster.

1. 'draft create': Create a new Draft project by simply running the 'draft create' command - this command will walk you through a series of questions about your application (such as the application language) and create a Dockerfile, Helm chart, and Kubernetes manifests.
2. 'draft generate-workflow': Automatically build out a GitHub Action workflow using the 'draft generate-workflow' command.
3. 'draft setup-gh': If you are using Azure, use this command to automate the GitHub OIDC setup process to ensure that you will be able to deploy your application using your GitHub Action.

    At this point, you will have all the files needed to deploy your app onto a Kubernetes cluster (we told you it was easy!).
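Putting those commands together, a typical first run might look like the short sketch below, executed from your application's repository root (Draft prompts you for anything it needs):

# Generate a Dockerfile plus Kubernetes manifests/Helm chart for the current app
draft create

# Generate a GitHub Actions workflow to build and deploy to your cluster
draft generate-workflow

# (Azure only) Automate the GitHub OIDC / federated credential setup
draft setup-gh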

    You can also use the 'draft info' command if you are looking for information on supported languages and deployment types. Let's see it in action, shall we?


    Developing to AKS with Draft

In this Microsoft Reactor session below, we'll briefly introduce Kubernetes and the Azure Kubernetes Service (AKS) and then demo how to enable your applications for Kubernetes using the open-source tool Draft. We'll show how Draft can help you create the boilerplate code to containerize your applications and add routing and scaling behaviours.

Conclusion

    Overall, Draft simplifies the process of building, deploying, and managing applications on Kubernetes, and can make the overall journey from code to Kubernetes significantly easier.


    Resources




    · 7 min read
    Vinicius Apolinario

    Welcome to Day 3 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about using Draft to accelerate your Kubernetes adoption. Today we'll explore the topic of Windows containers.

    What We'll Cover

    • Introduction
    • Windows containers overview
    • Windows base container images
    • Isolation
    • Exercise: Try this yourself!
    • Resources: For self-study!

    Introduction

Windows containers were launched along with Windows Server 2016 and have evolved since. In the latest release, Windows Server 2022, Windows containers have reached a great level of maturity and allow customers to run production-grade workloads.

While suitable for new development, Windows containers also give developers and operations a different approach than Linux containers. They allow existing Windows applications to be containerized with little or no code change, and they let professionals who are more comfortable with the Windows platform and OS leverage their skill set while taking advantage of the containers platform.

Windows containers overview

In essence, Windows containers are very similar to Linux ones. Since Windows containers use the same Docker container foundation, you can expect the same architecture to apply - with some notes specific to the Windows OS. For example, when running a Windows container via Docker, you use the same commands, such as docker run. To pull a container image, you can use docker pull, just like on Linux. However, to run a Windows container, you also need a Windows container host. This requirement is there because, as you might remember, a container shares the OS kernel with its container host.

On Kubernetes, Windows containers have been supported since Windows Server 2019. Just like with Docker, you can manage Windows containers like any other resource in the Kubernetes ecosystem. A Windows node can be part of a Kubernetes cluster, allowing you to run Windows container based applications on services like Azure Kubernetes Service. To deploy a Windows application to a Windows pod in Kubernetes, you author a YAML specification much like you would for Linux. The main differences are that you point to an image that runs on Windows, and you specify a node selector to indicate that the pod needs to run on a Windows node.

    Windows base container images

On Windows containers, you will always use a base container image provided by Microsoft. This base container image contains the OS binaries for the container to run, and it can be as large as 3GB+ or as small as ~300MB. The difference in size is a consequence of the APIs and components available in each Windows base container image. There are primarily three images: Nano Server, Server Core, and Server.

Nano Server is the smallest image, at around 300MB. It's the base container image for new development and cloud-native scenarios. Applications need to target Nano Server as the Windows OS, so not all frameworks will work. For example, .NET works on Nano Server, but .NET Framework doesn't. Other third-party frameworks also work on Nano Server, such as Apache, NodeJS, Python, Tomcat, Java runtime, JBoss, and Redis, among others.

Server Core is a much larger base container image, at around 1.25GB. Its larger size is compensated by its application compatibility. Simply put, any application that meets the requirements to run in a Windows container can be containerized with this image.

    The Server image builds on the Server Core one. It ranges around 3.1GB and has even greater application compatibility than the Server Core image. In addition to the traditional Windows APIs and components, this image allows for scenarios such as Machine Learning via DirectX with GPU access.

The best image for your scenario depends on the requirements your application has on the Windows OS inside a container. However, there are some scenarios that are not supported at all on Windows containers - such as GUI or RDP dependent applications and some Windows Server infrastructure roles, such as Active Directory, among others.
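If you'd like to compare the images yourself, you can pull them on a Windows container host and check the sizes locally. A sketch using the LTSC 2022 tags:

docker pull mcr.microsoft.com/windows/nanoserver:ltsc2022
docker pull mcr.microsoft.com/windows/servercore:ltsc2022
docker pull mcr.microsoft.com/windows/server:ltsc2022

# Compare the image sizes on disk
docker images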

    Isolation

    When running containers, the kernel of the container host is shared with the containers running on it. While extremely convenient, this poses a potential risk for multi-tenant scenarios. If one container is compromised and has access to the host, it could potentially compromise other containers in the same system.

For enterprise customers running on-premises (or even in the cloud), this can be mitigated by using a VM as a container host and considering the VM itself a security boundary. However, if multiple workloads from different tenants need to share the same host, Windows containers offer another option: Hyper-V isolation. While the name Hyper-V is associated with VMs, its virtualization capabilities extend to other services, including containers. Hyper-V isolated containers run on a purpose-built, extremely small, highly performant VM. However, you manage a container running with Hyper-V isolation the same way you do a process-isolated one. In fact, the only notable difference is that you need to append the --isolation=hyperv flag to the docker run command.

    Exercise

    Here are a few examples of how to use Windows containers:

    Run Windows containers via Docker on your machine

    To pull a Windows base container image:

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022

    To run a basic IIS container:

#This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    Run the same IIS container with Hyper-V isolation

#This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 --isolation=hyperv mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    To run a Windows container interactively:

    docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 powershell

    Run Windows containers on Kubernetes

To prepare an AKS cluster for Windows containers: Note: Replace the values in the example below with the ones from your environment.

    echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
    az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --generate-ssh-keys \
    --windows-admin-username $WINDOWS_USERNAME \
    --vm-set-type VirtualMachineScaleSets \
    --network-plugin azure

    To add a Windows node pool for Windows containers:

    az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwin \
    --node-count 1

    Deploy a sample ASP.Net application to the AKS cluster above using the YAML file below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "kubernetes.io/os": windows
      containers:
        - name: sample
          image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
          resources:
            limits:
              cpu: 1
              memory: 800M
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
  selector:
    app: sample

    Save the file above and run the command below on your Kubernetes cluster:

kubectl apply -f <filename>

    Once deployed, you can access the application by getting the IP address of your service:

    kubectl get service
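It's also worth confirming that the pod was scheduled onto the Windows node pool. A quick check:

# The NODE column should show a node from the Windows pool (npwin above)
kubectl get pods -o wide
kubectl get nodes -L kubernetes.io/os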

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!


    · 4 min read
    Jorge Arteiro

    Welcome to Day 4 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about Windows Containers. Today we'll explore addons and extensions available to Azure Kubernetes Services (AKS).

    What We'll Cover

    • Introduction
    • Add-ons
    • Extensions
    • Add-ons vs Extensions
    • Resources

    Introduction

    Azure Kubernetes Service (AKS) is a fully managed container orchestration service that makes it easy to deploy and manage containerized applications on Azure. AKS offers a number of features and capabilities, including the ability to extend its supported functionality through the use of add-ons and extensions.

    There are also integrations available from open-source projects and third parties, but they are not covered by the AKS support policy.

    Add-ons

Add-ons provide a supported way to extend AKS. Installation, configuration, and lifecycle are managed by AKS following pre-determined update rules.

As an example, let's enable Container Insights with the monitoring add-on on an existing AKS cluster using the az aks enable-addons CLI command:

    az aks enable-addons \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --addons monitoring

or you can use az aks create --enable-addons when creating new clusters:

    az aks create \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --enable-addons monitoring
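To see which add-ons are currently enabled on a cluster, you can query its addon profiles, and an add-on can be turned off later if you no longer need it. A sketch reusing the same example names:

az aks show \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --query addonProfiles

az aks disable-addons \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --addons monitoring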

    The current available add-ons are:

    1. http_application_routing - Configure ingress with automatic public DNS name creation. Only recommended for development.
    2. monitoring - Container Insights monitoring.
    3. virtual-node - CNCF virtual nodes open source project.
    4. azure-policy - Azure Policy for AKS.
    5. ingress-appgw - Application Gateway Ingress Controller (AGIC).
    6. open-service-mesh - CNCF Open Service Mesh project.
    7. azure-keyvault-secrets-provider - Azure Key Vault Secrets Provider for Secret Store CSI Driver.
    8. web_application_routing - Managed NGINX ingress Controller.
    9. keda - CNCF Event-driven autoscaling project.

    For more details, get the updated list of AKS Add-ons here

    Extensions

Cluster extensions use Helm charts and integrate with Azure Resource Manager (ARM) to provide installation and lifecycle management of capabilities on top of AKS.

Extensions can be auto-upgraded across minor versions, but this requires extra management and configuration. Using the scope parameter, an extension can be installed on the whole cluster or per namespace.

AKS extensions require an Azure CLI extension to be installed. To add or update this CLI extension, use the following commands:

    az extension add --name k8s-extension

    and to update an existing extension

    az extension update --name k8s-extension

    There are only 3 available extensions:

    1. Dapr - CNCF Dapr project.
    2. Azure ML - Integrate Azure Machine Learning with AKS to train, inference and manage ML models.
    3. Flux (GitOps) - CNCF Flux project integrated with AKS to enable cluster configuration and application deployment using GitOps.

    As an example, you can install Azure ML using the following command:

    az k8s-extension create \
    --name aml-compute --extension-type Microsoft.AzureML.Kubernetes \
    --scope cluster --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters \
    --configuration-settings enableInference=True allowInsecureConnections=True
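Once installed, the same CLI extension can list or inspect what's running on a cluster. A sketch with the same placeholder names:

az k8s-extension list \
    --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters

az k8s-extension show \
    --name aml-compute \
    --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters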

    For more details, get the updated list of AKS Extensions here

    Add-ons vs Extensions

AKS add-ons bring the advantage of being fully managed by AKS itself, while AKS extensions are more flexible and configurable but require an extra level of management.

Add-ons are part of the AKS resource provider in the Azure API, while AKS extensions use a separate resource provider in the Azure API.

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!


    · 6 min read
    Cory Skimming
    Steven Murawski
    Paul Yu
    Josh Duffney
    Nitya Narasimhan
    Vinicius Apolinario
    Jorge Arteiro
    Devanshi Joshi

    And that's a wrap on the inaugural #CloudNativeNewYear! Thank you for joining us to kick off the new year with this learning journey into cloud-native! In this final post of the 2023 celebration of all things cloud-native, we'll do two things:

    • Look Back - with a quick retrospective of what was covered.
    • Look Ahead - with resources and suggestions for how you can continue your skilling journey!

    We appreciate your time and attention and we hope you found this curated learning valuable. Feedback and suggestions are always welcome. From our entire team, we wish you good luck with the learning journey - now go build some apps and share your knowledge! 🎉


    What We'll Cover

    • Cloud-native fundamentals
    • Kubernetes fundamentals
    • Bringing your applications to Kubernetes
    • Go further with cloud-native
    • Resources to keep the celebration going!

    Week 1: Cloud-native Fundamentals

    In Week 1, we took a tour through the fundamentals of cloud-native technologies, including a walkthrough of the core concepts of containers, microservices, and Kubernetes.

    • Jan 23 - Cloud-native Fundamentals: The answers to life and all the universe - what is cloud-native? What makes an application cloud-native? What are the benefits? (yes, we all know it's 42, but hey, gotta start somewhere!)
    • Jan 24 - Containers 101: Containers are an essential component of cloud-native development. In this intro post, we cover how containers work and why they have become so popular.
    • Jan 25 - Kubernetes 101: Kuber-what-now? Learn the basics of Kubernetes and how it enables us to deploy and manage our applications effectively and consistently.
    A QUICKSTART GUIDE TO KUBERNETES CONCEPTS

    Missed it Live? Tune in to A Quickstart Guide to Kubernetes Concepts on demand, now!

    • Jan 26 - Microservices 101: What is a microservices architecture and how can we go about designing one?
    • Jan 27 - Exploring your Cloud Native Options: Cloud-native, while catchy, can be a very broad term. What technologies should you use? Learn some basic guidelines for when it is optimal to use different technologies for your project.

    Week 2: Kubernetes Fundamentals

    In Week 2, we took a deeper dive into the Fundamentals of Kubernetes. The posts and live demo from this week took us through how to build a simple application on Kubernetes, covering everything from deployment to networking and scaling. Note: for our samples and demo we have used Azure Kubernetes Service, but the principles apply to any Kubernetes!

    • Jan 30 - Pods and Deployments: how to use pods and deployments in Kubernetes.
    • Jan 31 - Services and Ingress: how to use services and ingress and a walk through the steps of making our containers accessible internally and externally!
    • Feb 1 - ConfigMaps and Secrets: how to pass configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.
    • Feb 2 - Volumes, Mounts, and Claims: how to use persistent storage on Kubernetes (and ensure your data can survive container restarts!).
    • Feb 3 - Scaling Pods and Nodes: how to scale pods and nodes in our Kubernetes cluster.
    ASK THE EXPERTS: AZURE KUBERNETES SERVICE

    Missed it Live? Tune in to Ask the Expert with Azure Kubernetes Service on demand, now!


    Week 3: Bringing your applications to Kubernetes

    So, you have learned how to build an application on Kubernetes. What about your existing applications? In Week 3, we explored how to take an existing application and set it up to run in Kubernetes:

    • Feb 6 - CI/CD: learn how to get an existing application running in Kubernetes with a full pipeline in GitHub Actions.
    • Feb 7 - Adapting Storage, Secrets, and Configuration: how to evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes.
    • Feb 8 - Opening your Application with Ingress: how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.
    • Feb 9 - Debugging and Instrumentation: how to debug and instrument your application now that it is on Kubernetes.
    • Feb 10 - CI/CD Secure Supply Chain: now that we have set up our application on Kubernetes, let's talk about container image signing and how to set up a secure supply chain.

    Week 4: Go Further with Cloud-Native

    This week we have gone further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner.

    And today, February 17th, with this one post to rule (er, collect) them all!


    Keep the Learning Going!

    Learning is great, so why stop here? We have a host of great resources and samples for you to continue your cloud-native journey with Azure below:



    · 5 min read
    Cory Skimming

    Welcome to Week 1 of #CloudNativeNewYear!

    Cloud-native New Year

    You will often hear the term "cloud-native" when discussing modern application development, but even a quick online search will return a huge number of articles, tweets, and web pages with a variety of definitions. So, what does cloud-native actually mean? Also, what makes an application a cloud-native application versus a "regular" application?

    Today, we will address these questions and more as we kickstart our learning journey (and our new year!) with an introductory dive into the wonderful world of cloud-native.


    What We'll Cover

    • What is cloud-native?
    • What is a cloud-native application?
    • The benefits of cloud-native
    • The five pillars of cloud-native
    • Exercise: Take the Cloud Skills Challenge!

    1. What is cloud-native?

    The term "cloud-native" can seem pretty self-evident (yes, hello, native to the cloud?), and in a way, it is. While there are lots of definitions of cloud-native floating around, at it's core, cloud-native simply refers to a modern approach to building software that takes advantage of cloud services and environments. This includes using cloud-native technologies, such as containers, microservices, and serverless, and following best practices for deploying, scaling, and managing applications in a cloud environment.

    Official definition from the Cloud Native Computing Foundation:

    Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

    These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil. Source


    2. So, what exactly is a cloud-native application?

    Cloud-native applications are specifically designed to take advantage of the scalability, resiliency, and distributed nature of modern cloud infrastructure. But how does this differ from a "traditional" application?

    Traditional applications have generally been built, tested, and deployed as a single, monolithic unit. The monolithic nature of this type of architecture creates close dependencies between components. This complexity and interweaving only increases as an application grows and can make it difficult to evolve (not to mention troubleshoot) and challenging to operate over time.

    In contrast, in cloud-native architectures the application components are decomposed into loosely coupled services, rather than built and deployed as one block of code. This decomposition into multiple self-contained services enables teams to manage complexity and improve the speed, agility, and scale of software delivery. Having many small parts enables teams to make targeted updates, deliver new features, and fix issues without causing broader service disruption.


    3. The benefits of cloud-native

    Cloud-native architectures can bring many benefits to an organization, including:

    1. Scalability: easily scale up or down based on demand, allowing organizations to adjust their resource usage and costs as needed.
    2. Flexibility: deploy and run on any cloud platform, and easily move between clouds and on-premises environments.
    3. High-availability: techniques such as redundancy, self-healing, and automatic failover help ensure that cloud-native applications are designed to be highly-available and fault tolerant.
    4. Reduced costs: take advantage of the pay-as-you-go model of cloud computing, reducing the need for expensive infrastructure investments.
    5. Improved security: tap in to cloud security features, such as encryption and identity management, to improve the security of the application.
    6. Increased agility: easily add new features or services to your applications to meet changing business needs and market demand.

    4. The pillars of cloud-native

    There are five areas that are generally cited as the core building blocks of cloud-native architecture:

    1. Microservices: Breaking down monolithic applications into smaller, independent, and loosely-coupled services that can be developed, deployed, and scaled independently.
    2. Containers: Packaging software in lightweight, portable, and self-sufficient containers that can run consistently across different environments.
    3. Automation: Using automation tools and DevOps processes to manage and operate the cloud-native infrastructure and applications, including deployment, scaling, monitoring, and self-healing.
    4. Service discovery: Using service discovery mechanisms, such as APIs & service meshes, to enable services to discover and communicate with each other.
    5. Observability: Collecting and analyzing data from the infrastructure and applications to understand and optimize the performance, behavior, and health of the system.

    These can (and should!) be used in combination to deliver cloud-native solutions that are highly scalable, flexible, and available.

    WHAT'S NEXT

    Stay tuned, as we will be diving deeper into these topics in the coming weeks:

    • Jan 24: Containers 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud-native Options

    Resources


    Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!



    · 4 min read
    Steven Murawski
    Paul Yu
    Josh Duffney

    Welcome to Day 2 of Week 1 of #CloudNativeNewYear!

    Today, we'll focus on building an understanding of containers.

    What We'll Cover

    • Introduction
    • How do Containers Work?
    • Why are Containers Becoming so Popular?
    • Conclusion
    • Resources
    • Learning Path

    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    In the beginning, we deployed our applications onto physical servers. We only had a certain number of those servers, so often they hosted multiple applications. This led to some problems when those applications shared dependencies. Upgrading one application could break another application on the same server.

    Enter virtualization. Virtualization allowed us to run our applications in an isolated operating system instance. This removed much of the risk of updating shared dependencies. However, it increased our overhead since we had to run a full operating system for each application environment.

    To address the challenges created by virtualization, containerization was created to improve isolation without duplicating kernel level resources. Containers provide efficient and consistent deployment and runtime experiences for our applications and have become very popular as a way of packaging and distributing applications.

    How do Containers Work?

    Containers build on two capabilities in the Linux operating system, namespaces and cgroups. These constructs allow the operating system to provide isolation to a process or group of processes, keeping their access to filesystem resources separate and providing controls on resource utilization. This, combined with tooling to help package, deploy, and run container images, has led to their popularity in today's operating environment. This gives us isolation without the overhead of additional operating system resources.

    When a container host is deployed on an operating system, it schedules access to the OS (operating system) components. It does this by providing a logically isolated group that can contain the processes for a given application, called a namespace. The container host then manages and schedules access from the namespace to the host OS, and uses cgroups to allocate compute resources. Together, with the help of cgroups and namespaces, the container host can schedule multiple applications to access host OS resources.

    Overall, this gives the illusion of virtualizing the host OS, where each application gets its own OS. In actuality, all the applications are running on the same operating system and sharing the same kernel as the container host.
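    If you have a container runtime such as Docker handy, you can see these limits being applied - a small illustrative sketch (the image tag and resource limits are arbitrary):

    # Run a container capped at half a CPU core and 256 MB of memory,
    # then print the cgroup membership the kernel assigned to its process.
    docker run --rm --cpus=0.5 --memory=256m alpine:3.18 cat /proc/self/cgroup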

    Containers are popular in the software development industry because they provide several benefits over traditional virtualization methods. Some of these benefits include:

    • Portability: Containers make it easy to move an application from one environment to another without having to worry about compatibility issues or missing dependencies.
    • Isolation: Containers provide a level of isolation between the application and the host system, which means that the application running in the container cannot access the host system's resources.
    • Scalability: Containers make it easy to scale an application up or down as needed, which is useful for applications that experience a lot of traffic or need to handle a lot of data.
    • Resource Efficiency: Containers are more resource-efficient than traditional virtualization methods because they don't require a full operating system to be running on each virtual machine.
    • Cost-Effective: Containers are more cost-effective than traditional virtualization methods because they don't require expensive hardware or licensing fees.

    Conclusion

    Containers are a powerful technology that allows developers to package and deploy applications in a portable and isolated environment. This technology is becoming increasingly popular in the world of software development and is being used by many companies and organizations to improve their application deployment and management processes. With the benefits of portability, isolation, scalability, resource efficiency, and cost-effectiveness, containers are definitely worth considering for your next application development project.

    Containerizing applications is a key step in modernizing them, and there are many other patterns that can be adopted to achieve cloud-native architectures, including using serverless platforms, Kubernetes, and implementing DevOps practices.

    Resources

    Learning Path


    · 3 min read
    Steven Murawski

    Welcome to Day 3 of Week 1 of #CloudNativeNewYear!

    This week we'll focus on what Kubernetes is.

    What We'll Cover

    • Introduction
    • What is Kubernetes? (Video)
    • How does Kubernetes Work? (Video)
    • Conclusion


    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    Kubernetes is an open source container orchestration engine that can help with automated deployment, scaling, and management of our applications.

    Kubernetes takes physical (or virtual) resources and provides a consistent API over them, bringing a consistency to the management and runtime experience for our applications. Kubernetes provides us with a number of capabilities such as:

    • Container scheduling
    • Service discovery and load balancing
    • Storage orchestration
    • Automated rollouts and rollbacks
    • Automatic bin packing
    • Self-healing
    • Secret and configuration management

    We'll learn more about most of these topics as we progress through Cloud Native New Year.
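    As a small taste of that consistent, declarative API, here is a minimal Deployment manifest - the names and image are placeholders for illustration only, and we'll build real manifests later in this series:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: hello-web
      template:
        metadata:
          labels:
            app: hello-web
        spec:
          containers:
          - name: hello-web
            image: nginx:1.25-alpine
            ports:
            - containerPort: 80

    Applying a file like this with kubectl apply asks the cluster to keep two replicas of the container running and to reconcile back to that state if one disappears.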

    What is Kubernetes?

    Let's hear from Brendan Burns, one of the founders of Kubernetes as to what Kubernetes actually is.

    How does Kubernetes Work?

    And Brendan shares a bit more with us about how Kubernetes works.

    Conclusion

    Kubernetes allows us to deploy and manage our applications effectively and consistently.

    By providing a consistent API across many of the concerns our applications have, like load balancing, networking, storage, and compute, Kubernetes improves our ability to build and ship new software.

    Applications can rely on standard ways of requesting the resources they need, and deployments, metrics, and logs are provided in a standardized fashion, allowing more efficient operations across our application environments.

    And since Kubernetes is an open source platform, it can be found in just about every type of operating environment - cloud, virtual machines, physical hardware, shared data centers, even small devices like Raspberry Pis!

    Want to learn more? Join us for a webinar on Kubernetes Concepts (or catch the playback) on Thursday, January 26th at 1 PM PST and watch for the rest of this series right here!


    · 6 min read
    Josh Duffney

    Welcome to Day 4 of Week 1 of #CloudNativeNewYear!

    This week we're focused on the fundamentals for cloud-native practitioners, and today's post dives into microservices: what they are, how to design them, and what challenges they introduce.

    What We'll Cover

    • What is Microservice Architecture?
    • How do you design a Microservice?
    • What challenges do Microservices introduce?
    • Conclusion
    • Resources


    Microservices are a modern way of designing and building software that increases deployment velocity by decomposing an application into small autonomous services that can be deployed independently.

    By deploying loosely coupled microservices your applications can be developed, deployed, and scaled independently. Because each service is independent, it can be updated or replaced without having to worry about the impact on the rest of the application. This means that if a bug is found in one service, it can be fixed without having to redeploy the entire application. All of which gives an organization the ability to deliver value to their customers faster.

    In this article, we will explore the basics of microservices architecture, its benefits and challenges, and how it can help improve the development, deployment, and maintenance of software applications.

    What is Microservice Architecture?

    Before explaining what Microservice architecture is, it’s important to understand what problems microservices aim to address.

    Traditional software development is centered around building monolithic applications. Monolithic applications are built as a single, large codebase, meaning your code is tightly coupled, which causes the monolithic app to suffer from the following:

    Too much Complexity: Monolithic applications can become complex and difficult to understand and maintain as they grow. This can make it hard to identify and fix bugs and add new features.

    Difficult to Scale: Monolithic applications can be difficult to scale, and because the whole application runs as a single unit it also becomes a single point of failure - one failing component can bring down the entire application.

    Slow Deployment: Deploying a monolithic application can be risky and time-consuming, as a small change in one part of the codebase can affect the entire application.

    Microservice architecture (often called microservices) is an architecture style that addresses the challenges created by Monolithic applications. Microservices architecture is a way of designing and building software applications as a collection of small, independent services that communicate with each other through APIs. This allows for faster development and deployment cycles, as well as easier scaling and maintenance than is possible with a monolithic application.

    How do you design a Microservice?

    Building applications with Microservices architecture requires a different approach. Microservices architecture focuses on business capabilities rather than technical layers, such as data access or messaging. Doing so requires that you shift your focus away from the technical stack and model your applications based upon the various domains that exist within the business.

    Domain-driven design (DDD) is a way to design software by focusing on the business needs. You can use Domain-driven design as a framework that guides the development of well-designed microservices by building services that encapsulate knowledge in each domain and abstract that knowledge from clients.

    In Domain-driven design you start by modeling the business domain and creating a domain model. A domain model is an abstract model of the business domain that distills and organizes a domain of knowledge and provides a common language for developers and domain experts. It's the resulting domain model that microservices are best suited to be built around, because it helps establish a well-defined boundary between external systems and other internal applications.

    In short, before you begin designing microservices, start by mapping the functions of the business and their connections to create a domain model for the microservice(s) to be built around.

    What challenges do Microservices introduce?

    Microservices solve a lot of problems and have several advantages, but the grass isn’t always greener on the other side.

    One of the key challenges of microservices is managing communication between services. Because services are independent, they need to communicate with each other through APIs. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear API design, with well-defined inputs and outputs for each service. It is also important to have a system for managing and monitoring communication between services, to ensure that everything is running smoothly.

    Another challenge of microservices is managing the deployment and scaling of services. Because each service is independent, it needs to be deployed and scaled separately from the rest of the application. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear and consistent deployment process, with well-defined steps for deploying and scaling each service. Furthermore, it is advisable to host them on a system with self-healing capabilities to reduce operational burden.

    It is also important to have a system for monitoring and managing the deployment and scaling of services, to ensure optimal performance.

    Each of these challenges has created fertile ground for the tooling and processes that exist in the cloud-native ecosystem. Kubernetes, CI/CD, and other DevOps practices are part of the package of adopting a microservices architecture.

    Conclusion

    In summary, microservices architecture focuses on software applications as a collection of small, independent services that communicate with each other over well-defined APIs.

    The main advantages of microservices include:

    • increased flexibility and scalability per microservice,
    • efficient resource utilization (with help from a container orchestrator like Kubernetes),
    • and faster development cycles.

    Continue following along with this series to see how you can use Kubernetes to help adopt microservices patterns in your own environments!

    Resources


    · 6 min read
    Cory Skimming

    We are excited to be wrapping up our first week of #CloudNativeNewYear! This week, we have tried to set the stage by covering the fundamentals of cloud-native practices and technologies, including primers on containerization, microservices, and Kubernetes.

    Don't forget to sign up for the Cloud Skills Challenge!

    Today, we will do a brief recap of some of these technologies and provide some basic guidelines for when it is optimal to use each.


    What We'll Cover

    • To Containerize or not to Containerize?
    • The power of Kubernetes
    • Where does Serverless fit?
    • Resources
    • What's coming next!


    Just joining us now? Check out these other Week 1 posts:

    To Containerize or not to Containerize?

    As mentioned in our Containers 101 post earlier this week, containers can provide several benefits over traditional virtualization methods, which has made them popular within the software development community. Containers provide a consistent and predictable runtime environment, which can help reduce the risk of compatibility issues and simplify the deployment process. Additionally, containers can improve resource efficiency by allowing multiple applications to run on the same host while isolating their dependencies.

    Some types of apps that are a particularly good fit for containerization include:

    1. Microservices: Containers are particularly well-suited for microservices-based applications, as they can be used to isolate and deploy individual components of the system. This allows for more flexibility and scalability in the deployment process.
    2. Stateless applications: Applications that do not maintain state across multiple sessions, such as web applications, are well-suited for containers. Containers can be easily scaled up or down as needed and replaced with new instances, without losing data.
    3. Portable applications: Applications that need to be deployed in different environments, such as on-premises, in the cloud, or on edge devices, can benefit from containerization. The consistent and portable runtime environment of containers can make it easier to move the application between different environments.
    4. Legacy applications: Applications that are built using older technologies or that have compatibility issues can be containerized to run in an isolated environment, without impacting other applications or the host system.
    5. Dev and testing environments: Containerization can be used to create isolated development and testing environments, which can be easily created and destroyed as needed.

    While there are many types of applications that can benefit from a containerized approach, it's worth noting that containerization is not always the best option, and it's important to weigh the benefits and trade-offs before deciding to containerize an application. Additionally, some types of applications may not be a good fit for containers including:

    • Apps that require full access to host resources: Containers are isolated from the host system, so if an application needs direct access to hardware resources such as GPUs or specialized devices, it might not work well in a containerized environment.
    • Apps that require low-level system access: If an application requires deep access to the underlying operating system, it may not be suitable for running in a container.
    • Applications that have specific OS dependencies: Apps that have specific dependencies on a certain version of an operating system or libraries may not be able to run in a container.
    • Stateful applications: Apps that maintain state across multiple sessions, such as databases, may not be well suited for containers. Containers are ephemeral by design, so the data stored inside a container may not persist between restarts.

    The good news is that some of these limitations can be overcome with the use of container orchestration technologies such as Kubernetes, and by carefully designing the architecture of the application.


    The power of Kubernetes

    Speaking of Kubernetes...

    Kubernetes is a powerful tool for managing and deploying containerized applications in production environments, particularly for applications that need to scale, handle large numbers of requests, or run in multi-cloud or hybrid environments.

    Kubernetes is well-suited for a wide variety of applications, but it is particularly well-suited for the following types of applications:

    1. Microservices-based applications: Kubernetes provides a powerful set of tools for managing and deploying microservices-based applications, making it easy to scale, update, and manage the individual components of the application.
    2. Stateful applications: Kubernetes provides support for stateful applications through the use of Persistent Volumes and StatefulSets, allowing for applications that need to maintain state across multiple instances.
    3. Large-scale, highly-available systems: Kubernetes provides built-in support for scaling, self-healing, and rolling updates, making it an ideal choice for large-scale, highly-available systems that need to handle large numbers of users and requests.
    4. Multi-cloud and hybrid environments: Kubernetes can be used to deploy and manage applications across multiple cloud providers and on-premises environments, making it a good choice for organizations that want to take advantage of the benefits of multiple cloud providers or that need to deploy applications in a hybrid environment.
    New to Kubernetes?

    Where does Serverless fit in?

    Serverless is a cloud computing model where the cloud provider (like Azure) is responsible for executing a piece of code by dynamically allocating the resources. With serverless, you only pay for the exact amount of compute time that you use, rather than paying for a fixed amount of resources. This can lead to significant cost savings, particularly for applications with variable or unpredictable workloads.

    Serverless is commonly used for building applications like web or mobile apps, IoT, data processing, and real-time streaming - apps where the workloads are variable and high scalability is required. It's important to note that serverless is not a replacement for all types of workloads - it's best suited for stateless, short-lived and small-scale workloads.

    For a detailed look into the world of Serverless and lots of great learning content, revisit #30DaysofServerless.


    Resources


    What's up next in #CloudNativeNewYear?

    Week 1 has been all about the fundamentals of cloud-native. Next week, the team will be diving in to application deployment with Azure Kubernetes Service. Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!



    · 14 min read
    Steven Murawski

    Welcome to Day #1 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Last week we talked about Cloud Native architectures and the Cloud Native landscape. Today we'll explore the topic of Pods and Deployments in Kubernetes.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Setting Up A Kubernetes Environment in Azure
    • Running Containers in Kubernetes Pods
    • Making the Pods Resilient with Deployments
    • Exercise
    • Resources

    Setting Up A Kubernetes Environment in Azure

    For this week, we'll be working with a simple app - the Azure Voting App. My teammate Paul Yu ported the app to Rust and we tweaked it a bit to let us highlight some of the basic features of Kubernetes.

    You should be able to replicate this in just about any Kubernetes environment, but we'll use Azure Kubernetes Service (AKS) as our working environment for this week.

    To make it easier to get started, there's a Bicep template to deploy an AKS cluster, an Azure Container Registry (ACR) (to host our container image), and connect the two so that we can easily deploy our application.

    Step 0 - Prerequisites

    There are a few things you'll need if you want to work through this and the following examples this week.

    Required:

    • Git (and probably a GitHub account if you want to persist your work outside of your computer)
    • Azure CLI
    • An Azure subscription (if you want to follow along with the Azure steps)
    • Kubectl (the command line tool for managing Kubernetes)

    Helpful:

    • Visual Studio Code (or equivalent editor)

    Step 1 - Clone the application repository

    First, I forked the source repository to my account.

    $GitHubOrg = 'smurawski' # Replace this with your GitHub account name or org name
    git clone "https://github.com/$GitHubOrg/azure-voting-app-rust"
    cd azure-voting-app-rust

    Leave your shell opened with your current location inside the application repository.

    Step 2 - Set up AKS

    Running the template deployment from the demo script (I'm using the PowerShell example in cnny23-week2-day1.ps1, but there's a Bash variant at cnny23-week2-day1.sh) stands up the environment. The second, third, and fourth commands take some of the output from the Bicep deployment to set up for later commands, so don't close out your shell after you run these commands.

    az deployment sub create --template-file ./deploy/main.bicep --location eastus --parameters 'resourceGroup=cnny-week2'
    $AcrName = az deployment sub show --name main --query 'properties.outputs.acr_name.value' -o tsv
    $AksName = az deployment sub show --name main --query 'properties.outputs.aks_name.value' -o tsv
    $ResourceGroup = az deployment sub show --name main --query 'properties.outputs.resource_group_name.value' -o tsv

    az aks get-credentials --resource-group $ResourceGroup --name $AksName

    Step 3 - Build our application container

    Since we have an Azure Container Registry set up, I'll use ACR Build Tasks to build and store my container image.

    az acr build --registry $AcrName --% --image cnny2023/azure-voting-app-rust:{{.Run.ID}} .
    $BuildTag = az acr repository show-tags `
    --name $AcrName `
    --repository cnny2023/azure-voting-app-rust `
    --orderby time_desc `
    --query '[0]' -o tsv
    tip

    Wondering what the --% is in the first command line? That tells the PowerShell interpreter to pass the input after it "as is" to the command without parsing/evaluating it. Otherwise, PowerShell messes a bit with the templated {{.Run.ID}} bit.

    Running Containers in Kubernetes Pods

    Now that we have our AKS cluster and application image ready to go, let's look into how Kubernetes runs containers.

    If you've been in tech for any length of time, you've seen that every framework, runtime, orchestrator, etc. can have its own naming scheme for its concepts. So let's get into some of what Kubernetes calls things.

    The Pod

    A container running in Kubernetes is called a Pod. A Pod is basically a running container on a Node or VM. It can be more than that - for example, you can run multiple containers in one Pod and specify some funky configuration - but we'll keep it simple for now and add the complexity when you need it.
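    For the curious, "more" might look like a second container sharing the same Pod - a purely illustrative sketch with made-up names, not something we'll deploy in this series:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-sidecar
    spec:
      containers:
      - name: app
        image: ghcr.io/example/my-app:1.0          # placeholder image
        ports:
        - containerPort: 8080
      - name: log-forwarder                        # sidecar sharing the Pod's network and lifecycle
        image: ghcr.io/example/log-forwarder:1.0   # placeholder image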

    Our Pod definition can be created via the kubectl command imperatively from arguments or declaratively from a configuration file. We'll do a little of both. We'll use the kubectl command to help us write our configuration files. Kubernetes configuration files are YAML, so having an editor that supports and can help you syntax check YAML is really helpful.

    Creating a Pod Definition

    Let's create a few Pod definitions. Our application requires two containers to get working - the application and a database.

    Let's create the database Pod first. And before you comment, the configuration isn't secure nor best practice. We'll fix that later this week. For now, let's focus on getting up and running.

    This is a trick I learned from one of my teammates - Paul. By using the --output yaml and --dry-run=client options, we can have the command help us write our YAML. And with a bit of output redirection, we can stash it safely in a file for later use.

    kubectl run azure-voting-db `
    --image "postgres:15.0-alpine" `
    --env "POSTGRES_PASSWORD=mypassword" `
    --output yaml `
    --dry-run=client > manifests/pod-db.yaml

    This creates a file that looks like:

    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: azure-voting-db
      name: azure-voting-db
    spec:
      containers:
      - env:
        - name: POSTGRES_PASSWORD
          value: mypassword
        image: postgres:15.0-alpine
        name: azure-voting-db
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}

    The file, when supplied to the Kubernetes API, will identify what kind of resource to create, the API version to use, and the details of the container (as well as an environment variable to be supplied).

    We'll get that container image started with the kubectl command. Because the details of what to create are in the file, we don't need to specify much else to the kubectl command but the path to the file.

    kubectl apply -f ./manifests/pod-db.yaml

    I'm going to need the IP address of the Pod so that my application can connect to it, and we can use kubectl to get that information about our pod. By default, kubectl get pod only displays certain information, but it retrieves a lot more. We can use JSONPath syntax to index into the response and get just the information we want.

    tip

    To see what you can get, I usually run the kubectl command with the output type (-o JSON) of JSON and then I can find where the data I want is and create my JSONPath query to get it.

    $DB_IP = kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'
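    JSONPath can also pull several fields at once if you ever need more than the IP - for example (purely illustrative):

    kubectl get pod azure-voting-db -o jsonpath='{.metadata.name}{"\t"}{.status.podIP}{"\n"}'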

    Now, let's create our Pod definition for our application. We'll use the same technique as before.

    kubectl run azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --env "DATABASE_SERVER=$DB_IP" `
    --env "DATABASE_PASSWORD=mypassword`
    --output yaml `
    --dry-run=client > manifests/pod-app.yaml

    That command gets us a similar YAML file to the database container - you can see the full file here

    Let's get our application container running.

    kubectl apply -f ./manifests/pod-app.yaml

    Now that the Application is Running

    We can check the status of our Pods with:

    kubectl get pods

    And we should see something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app 1/1 Running 0 36s
    azure-voting-db 1/1 Running 0 84s

    Once our pod is running, we can check to make sure everything is working by letting kubectl proxy network connections to our Pod running the application. If we get the voting web page, we'll know the application found the database and we can start voting!

    kubectl port-forward pod/azure-voting-app 8080:8080

    Azure voting website in a browser with three buttons, one for Dogs, one for Cats, and one for Reset.  The counter is Dogs - 0 and Cats - 0.

    When you are done voting, you can stop the port forwarding by using Control-C to break the command.

    Clean Up

    Let's clean up after ourselves and see if we can't get Kubernetes to help us keep our application running. We can use the same configuration files to ensure that Kubernetes only removes what we want removed.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    Summary - Pods

    A Pod is the most basic unit of work inside Kubernetes. Once the Pod is deleted, it's gone. That leads us to our next topic (and final topic for today.)

    Making the Pods Resilient with Deployments

    We've seen how easy it is to deploy a Pod and get our containers running on Nodes in our Kubernetes cluster. But there's a problem with that. Let's illustrate it.

    Breaking Stuff

    Setting Back Up

    First, let's redeploy our application environment. We'll start with our application container.

    kubectl apply -f ./manifests/pod-db.yaml
    kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    The second command will report out the new IP Address for our database container. Let's open ./manifests/pod-app.yaml and update the container IP to our new one.

    - name: DATABASE_SERVER
      value: YOUR_NEW_IP_HERE

    Then we can deploy the application with the information it needs to find its database. We'll also list out our pods to see what is running.

    kubectl apply -f ./manifests/pod-app.yaml
    kubectl get pods

    Feel free to look back and use the port forwarding trick to make sure your app is running if you'd like.

    Knocking It Down

    The first thing we'll try to break is our application pod. Let's delete it.

    kubectl delete pod azure-voting-app

    Then, we'll check our pod's status:

    kubectl get pods

    Which should show something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db 1/1 Running 0 50s

    We should be able to recreate our application pod deployment with no problem, since it has the current database IP address and nothing else depends on it.

    kubectl apply -f ./manifests/pod-app.yaml

    Again, feel free to do some fun port forwarding and check your site is running.

    Uncomfortable Truths

    Here's where it gets a bit stickier, what if we delete the database container?

    If we delete our database container and recreate it, it'll likely have a new IP address, which would force us to update our application configuration. We'll look at some solutions for these problems in the next three posts this week.

    Because our database problem is a bit tricky, we'll primarily focus on making our application layer more resilient and prepare our database layer for those other techniques over the next few days.

    Let's clean back up and look into making things more resilient.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    The Deployment

    One of the reasons you may want to use Kubernetes is its ability to orchestrate workloads. Part of that orchestration includes being able to ensure that certain workloads are running (regardless of what Node they might be on).

    We saw that we could delete our application pod and then restart it from the manifest with little problem. It just meant that we had to run a command to restart it. We can use the Deployment in Kubernetes to tell the orchestrator to ensure we have our application pod running.

    The Deployment also can encompass a lot of extra configuration - controlling how many containers of a particular type should be running, how upgrades of container images should proceed, and more.

    Creating the Deployment

    First, we'll create a Deployment for our database. We'll use a technique similar to what we did for the Pod, with just a bit of difference.

    kubectl create deployment azure-voting-db `
    --image "postgres:15.0-alpine" `
    --port 5432 `
    --output yaml `
    --dry-run=client > manifests/deployment-db.yaml

    Unlike our Pod definition creation, we can't pass in environment variable configuration from the command line. We'll have to edit the YAML file to add that.

    So, let's open ./manifests/deployment-db.yaml in our editor and add the following in the spec/containers configuration.

            env:
            - name: POSTGRES_PASSWORD
              value: "mypassword"

    Your file should look like this deployment-db.yaml.

    Once we have our configuration file updated, we can deploy our database container image.

    kubectl apply -f ./manifests/deployment-db.yaml

    For our application, we'll use the same technique.

    kubectl create deployment azure-voting-app `
    --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
    --port 8080 `
    --output yaml `
    --dry-run=client > manifests/deployment-app.yaml

    Next, we'll need to add an environment variable to the generated configuration. We'll also need the new IP address for the database deployment.

    Previously, we named the pod and were able to ask for the IP address with kubectl and a bit of JSONPath. Now, the deployment created the pod for us, so there's a bit of randomness in the naming. Check out:

    kubectl get pods

    Should return something like:

    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 7s

    We can either ask for the IP with the new pod name, or we can use a selector to find our desired pod.

    kubectl get pod --selector app=azure-voting-db -o jsonpath='{.items[0].status.podIP}'

    Now, we can update our application deployment configuration file with:

            env:
            - name: DATABASE_SERVER
              value: YOUR_NEW_IP_HERE
            - name: DATABASE_PASSWORD
              value: mypassword

    Your file should look like this deployment-app.yaml (but with IPs and image names matching your environment).

    After we save those changes, we can deploy our application.

    kubectl apply -f ./manifests/deployment-app.yaml

    Let's test the resilience of our app now. First, we'll delete the pod running our application, then we'll check to make sure Kubernetes restarted our application pod.

    kubectl get pods
    azure-voting-app-rust ❯  kubectl get pods
    NAME READY STATUS RESTARTS AGE
    azure-voting-app-56c9ccc89d-skv7x 1/1 Running 0 71s
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 12m
    kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
    kubectl get pods
    azure-voting-app-rust ❯  kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
    >> kubectl get pods
    pod "azure-voting-app-56c9ccc89d-skv7x" deleted
    NAME READY STATUS RESTARTS AGE
    azure-voting-app-56c9ccc89d-2b5mx 1/1 Running 0 2s
    azure-voting-db-686d758fbf-8jnq8 1/1 Running 0 14m
    info

    Your Pods will likely have different identifiers at the end, so adjust your commands to match the names in your environment.

    As you can see, by the time the kubectl get pods command was run, Kubernetes had already spun up a new pod for the application container image. Thanks Kubernetes!

    Clean up

    Since deleting the pods would just cause the Deployments to recreate them, we have to delete the Deployments themselves.

    kubectl delete -f ./manifests/deployment-app.yaml
    kubectl delete -f ./manifests/deployment-db.yaml

    Summary - Deployments

    Deployments allow us to create a more durable configuration for the workloads we deploy into Kubernetes. As we dig deeper, we'll discover more of the capabilities Deployments offer. Check out the Resources below for more.
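    As a small preview of those capabilities, a Deployment can report on and roll back its rollouts with a couple of commands (shown against our azure-voting-app Deployment - only useful if you still have it deployed):

    kubectl rollout status deployment/azure-voting-app
    kubectl rollout history deployment/azure-voting-app
    kubectl rollout undo deployment/azure-voting-app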

    Exercise

    If you want to try these steps, head over to the source repository, fork it, clone it locally, and give it a spin!

    You can check your manifests against the manifests in the week2/day1 branch of the source repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training


    · 6 min read
    Josh Duffney

    Welcome to Day 3 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about Services and Ingress. Today we'll explore the topic of passing configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Decouple configurations with ConfigMaps and Secrets
    • Passing Environment Data with ConfigMaps and Secrets
    • Conclusion

    Decouple configurations with ConfigMaps and Secrets

    A ConfigMap is a Kubernetes object that decouples configuration data from pod definitions. Kubernetes Secrets are similar, but were designed to decouple sensitive information.

    Separating the configuration and secrets from your application promotes better organization and security of your Kubernetes environment. It also enables you to share the same configuration and different secrets across multiple pods and deployments, which can simplify scaling and management. Using ConfigMaps and Secrets in Kubernetes is a best practice that can help to improve the scalability, security, and maintainability of your cluster.

    By the end of this tutorial, you'll have added a Kubernetes ConfigMap and Secret to the Azure Voting deployment.

    Passing Environment Data with ConfigMaps and Secrets

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Create the ConfigMap

    ConfigMaps can be used in one of two ways: as environment variables or as volumes.
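    For reference, the volume approach mounts each key in the ConfigMap as a file inside the container - roughly like this sketch (the volume name and mount path are made up):

    spec:
      containers:
      - name: azure-voting-app
        image: <your-image>
        volumeMounts:
        - name: app-config
          mountPath: /etc/config
          readOnly: true
      volumes:
      - name: app-config
        configMap:
          name: azure-voting-config

    With that in place, a key like DATABASE_SERVER would show up as the file /etc/config/DATABASE_SERVER.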

    For this tutorial you'll use a ConfigMap to create three environment variables inside the pod: DATABASE_SERVER, FIRST_VALUE, and SECOND_VALUE. DATABASE_SERVER provides part of the connection string to the Postgres database. FIRST_VALUE and SECOND_VALUE are configuration options that change which voting options the application presents to the users.

    Follow the below steps to create a new ConfigMap:

    1. Create a YAML file named 'config-map.yaml'. In this file, specify the environment variables for the application.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: azure-voting-config
      data:
        DATABASE_SERVER: azure-voting-db
        FIRST_VALUE: "Go"
        SECOND_VALUE: "Rust"
    2. Create the config map in your Kubernetes cluster by running the following command:

      kubectl create -f config-map.yaml

    Create the Secret

    The deployment-db.yaml and deployment-app.yaml are Kubernetes manifests that deploy the Azure Voting App. Currently, those deployment manifests contain the environment variables POSTGRES_PASSWORD and DATABASE_PASSWORD with the value stored as plain text. Your task is to replace that environment variable with a Kubernetes Secret.

    Create a Secret running the following commands:

    1. Encode mypassword.

      echo -n "mypassword" | base64
    2. Create a YAML file named secret.yaml. In this file, add POSTGRES_PASSWORD as the key and the encoded value returned above as the value in the data section.

      apiVersion: v1
      kind: Secret
      metadata:
        name: azure-voting-secret
      type: Opaque
      data:
        POSTGRES_PASSWORD: bXlwYXNzd29yZA==
    3. Create the Secret in your Kubernetes cluster by running the following command:

      kubectl create -f secret.yaml

    [!WARNING] While base64 encoding is a simple and widely supported way to obscure plaintext data, it is not secure, as it can easily be decoded. If you want to store sensitive data like passwords, you should use a more secure method, such as encrypting with a Key Management Service (KMS), before storing it in the Secret.
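    As an aside, kubectl can generate an equivalent Secret manifest for you, which avoids hand-encoding the value (syntax shown for illustration):

    kubectl create secret generic azure-voting-secret \
      --from-literal=POSTGRES_PASSWORD=mypassword \
      --dry-run=client -o yaml > secret.yaml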

    Modify the app deployment manifest

    With the ConfigMap and Secret both created, the next step is to replace the environment variables provided in the application deployment manifest with the values stored in the ConfigMap and the Secret.

    Complete the following steps to add the ConfigMap and Secret to the deployment manifest:

    1. Open the Kubernetes manifest file deployment-app.yaml.

    2. In the containers section, add an envFrom section and update the env section.

      envFrom:
      - configMapRef:
          name: azure-voting-config
      env:
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: azure-voting-secret
            key: POSTGRES_PASSWORD

      Using envFrom exposes all the values within the ConfigMap as environment variables, so you don't have to list them individually.

    3. Save the changes to the deployment manifest file.

    4. Apply the changes to the deployment by running the following command:

      kubectl apply -f deployment-app.yaml

    Modify the database deployment manifest

    Next, update the database deployment manifest and replace the plain text environment variable with the Kubernetes Secret.

    1. Open the deployment-db.yaml.

    2. To add the secret to the deployment, replace the env section with the following code:

      env:
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            name: azure-voting-secret
            key: POSTGRES_PASSWORD
    3. Apply the updated manifest.

      kubectl apply -f deployment-db.yaml

    Verify the ConfigMap and output environment variables

    Verify that the ConfigMap was added to your deployment by running the following command:

    kubectl describe deployment azure-voting-app

    Browse the output until you find the envFrom section with the config map reference.

    You can also verify that the environment variables from the ConfigMap are being passed to the container by running the command kubectl exec -it <pod-name> -- printenv. This command will show you all the environment variables passed to the pod, including the ones from the ConfigMap.
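    For example, targeting the Deployment saves you from looking up the generated pod name (illustrative only; adjust the variable names to match what you configured):

    kubectl exec deploy/azure-voting-app -- printenv | grep -E 'DATABASE_SERVER|FIRST_VALUE|SECOND_VALUE'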

    By following these steps, you will have successfully added a config map to the Azure Voting App Kubernetes deployment, and the environment variables defined in the config map will be passed to the container running in the pod.

    Verify the Secret and describe the deployment

    Once the secret has been created you can verify it exists by running the following command:

    kubectl get secrets

    You can view additional information, such as labels, annotations, type, and the Data by running kubectl describe:

    kubectl describe secret azure-voting-secret

    By default, the describe command doesn't output the encoded value, but if you output the results as JSON or YAML you'll be able to see the secret's encoded value.

     kubectl get secret azure-voting-secret -o json
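    And if you want to confirm the decoded value (mind where you run this), JSONPath plus base64 will do it:

    kubectl get secret azure-voting-secret -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 --decode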

    Conclusion

    In conclusion, using ConfigMaps and Secrets in Kubernetes can help to improve the scalability, security, and maintainability of your cluster. By decoupling configuration data and sensitive information from pod definitions, you can promote better organization and security in your Kubernetes environment. Additionally, separating these elements allows for sharing the same configuration and different secrets across multiple pods and deployments, simplifying scaling and management.

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 10 min read
    Steven Murawski

    Welcome to Day 5 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about adding persistent storage to our deployment. Today we'll explore the topic of scaling pods and nodes in our Kubernetes cluster.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Scaling Our Application
    • Scaling Pods
    • Scaling Nodes
    • Exercise
    • Resources

    Scaling Our Application

    One of our primary reasons to use a service like Kubernetes to orchestrate our workloads is the ability to scale. We've approached scaling in a multitude of ways over the years, taking advantage of the ever-evolving levels of hardware and software. Kubernetes allows us to scale our units of work, Pods, and the Nodes they run on. This allows us to take advantage of both hardware and software scaling abilities. Kubernetes can help improve the utilization of existing hardware (by scheduling Pods on Nodes that have resource capacity). And, with the capabilities of virtualization and/or cloud hosting (or a bit more work, if you have a pool of physical machines), Kubernetes can expand (or contract) the number of Nodes capable of hosting Pods. Scaling is primarily driven by resource utilization, but can be triggered by a variety of other sources thanks to projects like Kubernetes Event-driven Autoscaling (KEDA).

    Scaling Pods

    Our first level of scaling is with our Pods. Earlier, when we worked on our deployment, we talked about how Kubernetes uses the deployment configuration to ensure that we have the desired workloads running. One thing we didn't explore was running more than one instance of a pod. We can define a number of replicas of a pod in our Deployment.

    Manually Scale Pods

    So, if we wanted to define more pods right at the start (or at any point really), we could update our deployment configuration file with the number of replicas and apply that configuration file.

    spec:
      replicas: 5

    Or we could use the kubectl scale command to update the deployment with a number of pods to create.

    kubectl scale --replicas=5 deployment/azure-voting-app

    Both of these approaches modify the running configuration of our Kubernetes cluster and request that it ensure that we have that set number of replicas running. Because this was a manual change, the Kubernetes cluster won't automatically increase or decrease the number of pods. It'll just ensure that there are always the specified number of pods running.
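
    To confirm the change took effect, a quick look at the deployment and its pods might look like this (a sketch using the names from this series):

    ```bash
    # READY should eventually report 5/5, with five pods listed
    kubectl get deployment azure-voting-app
    kubectl get pods --selector app=azure-voting-app
    ```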

    Autoscale Pods with the Horizontal Pod Autoscaler

    Another approach to scaling our pods is to allow the Horizontal Pod Autoscaler to help us scale in response to resources being used by the pod. This requires a bit more configuration up front. When we define our pod in our deployment, we need to include resource requests and limits. The requests help Kubernetes determine what nodes may have capacity for a new instance of a pod. The limit tells us where the node should cap utilization for a particular instance of a pod. For example, we'll update our deployment to request 0.25 CPU and set a limit of 0.5 CPU.

    spec:
      containers:
        - image: acrudavoz.azurecr.io/cnny2023/azure-voting-app-rust:ca4
          name: azure-voting-app-rust
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              value: postgres://postgres:mypassword@10.244.0.29
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m

    Now that we've given Kubernetes an allowed range and an idea of what free resources a node should have to place new pods, we can set up autoscaling. Because autoscaling is a persistent configuration, I like to define it in a configuration file that I'll be able to keep with the rest of my cluster configuration. We'll use the kubectl command to help us write the configuration file. We'll request that Kubernetes watch our pods and, when the average CPU utilization reaches 50% of the requested usage (in our case, more than 0.375 CPU in total at the minimum of three pods, since each pod requests 0.25 CPU), it can grow the number of pods serving requests up to 10. If utilization drops, Kubernetes has permission to deprovision pods down to the minimum (three in our example).

    kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client

    Which would give us:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      creationTimestamp: null
      name: azure-voting-app
    spec:
      maxReplicas: 10
      minReplicas: 3
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: azure-voting-app
      targetCPUUtilizationPercentage: 50
    status:
      currentReplicas: 0
      desiredReplicas: 0

    So, how often does the autoscaler check the metrics being monitored? The autoscaler checks the Metrics API every 15 seconds; however, pod stats are only updated every 60 seconds. This means that an autoscale event may be evaluated about once a minute. Once a scale-down event happens, however, Kubernetes applies a cooldown period to let the workload redistribute and new metrics accumulate. There is no delay on scale-up events.
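
    A convenient way to watch this behavior (a small sketch, assuming the autoscaler created above) is to keep an eye on the HorizontalPodAutoscaler object while your application is under load:

    ```bash
    # TARGETS shows current vs. target CPU; REPLICAS shows the scaling decisions
    kubectl get hpa azure-voting-app --watch
    ```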

    Application Architecture Considerations

    We've focused in this example on our front end, which is an easier scaling story. When we start talking about scaling our database layers or anything that deals with persistent storage or has primary/replica configuration requirements things get a bit more complicated. Some of these applications may have built-in leader election or could use sidecars to help use existing features in Kubernetes to perform that function. For shared storage scenarios, persistent volumes (or persistent volumes with Azure) can be of help, if the application knows how to play well with shared file access.

    Ultimately, you know your application architecture and, while Kubernetes may not have an exact match to how you are doing things today, the underlying capability is probably there under a different name. This abstraction allows you to more effectively use Kubernetes to operate a variety of workloads with the levels of controls you need.

    Scaling Nodes

    We've looked at how to scale our pods, but that assumes we have enough resources in our existing pool of nodes to accommodate those scaling requests. Kubernetes can also help scale our available nodes to ensure that our applications have the necessary resources to meet their performance requirements.

    Manually Scale Nodes

    Manually scaling nodes isn't a direct function of Kubernetes, so the instructions depend on your operating environment. On Azure, it's pretty straightforward. Using the Azure CLI (or other tools), we can tell our AKS cluster to scale up or scale down the number of nodes in our node pool.

    First, we'll check out how many nodes we currently have in our working environment.

    kubectl get nodes

    This will show us

    azure-voting-app-rust ❯  kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    aks-pool0-37917684-vmss000000 Ready agent 5d21h v1.24.6

    Then, we'll scale it up to three nodes.

    az aks scale --resource-group $ResourceGroup --name $AksName --node-count 3

    Then, we'll check out how many nodes we now have in our working environment.

    kubectl get nodes

    Which returns:

    azure-voting-app-rust ❯  kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    aks-pool0-37917684-vmss000000 Ready agent 5d21h v1.24.6
    aks-pool0-37917684-vmss000001 Ready agent 5m27s v1.24.6
    aks-pool0-37917684-vmss000002 Ready agent 5m10s v1.24.6

    Autoscale Nodes with the Cluster Autoscaler

    Things get more interesting when we start working with the Cluster Autoscaler. The Cluster Autoscaler watches for the inability of Kubernetes to schedule the required number of pods due to resource constraints (and a few other criteria like affinity/anti-affinity). If there are insufficient resources available on the existing nodes, the autoscaler can provision new nodes into the nodepool. Likewise, the autoscaler watches to see if the existing pods could be consolidated to a smaller set of nodes and can remove excess nodes.

    Enabling the autoscaler is likewise an update that can be dependent on where and how your Kubernetes cluster is hosted. Azure makes it easy with a simple Azure CLI command.

    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 1 `
    --max-count 5

    There are a variety of settings that can be configured to tune how the autoscaler works.
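
    For example, on AKS the cluster autoscaler profile can be adjusted through the Azure CLI. The snippet below is only a sketch of the kind of tuning available; the specific values are illustrative, not recommendations.

    ```bash
    # Scan for unschedulable pods more often and scale down idle nodes sooner
    az aks update \
      --resource-group $ResourceGroup \
      --name $AksName \
      --cluster-autoscaler-profile scan-interval=30s scale-down-unneeded-time=5m
    ```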

    Scaling on Different Events

    CPU and memory utilization are the primary drivers for the Horizontal Pod Autoscaler, but those might not be the best measures as to when you might want to scale workloads. There are other options for scaling triggers and one of the more common plugins to help with that is the Kubernetes Event-driven Autoscaling (KEDA) project. The KEDA project makes it easy to plug in different event sources to help drive scaling. Find more information about using KEDA on AKS here.

    Exercise

    Let's try out the scaling configurations that we just walked through using our sample application. If you still have your environment from Day 1, you can use that.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Configure Horizontal Pod Autoscaler

    • Edit ./manifests/deployment-app.yaml to include resource requests and limits.

      resources:
        requests:
          cpu: 250m
        limits:
          cpu: 500m

    • Apply the updated deployment configuration.

      kubectl apply -f ./manifests/deployment-app.yaml

    • Create the horizontal pod autoscaler configuration and apply it.

      kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client > ./manifests/scaler-app.yaml
      kubectl apply -f ./manifests/scaler-app.yaml

    • Check to see your pods scale out to the minimum.

      kubectl get pods

    Configure Cluster Autoscaler

    Configuring the basic behavior of the Cluster Autoscaler is a bit simpler. We just need to run the Azure CLI command to enable the autoscaler and define our lower and upper limits.

    • Check the current nodes available (should be 1).

      kubectl get nodes

    • Update the cluster to enable the autoscaler.

      az aks update `
        --resource-group $ResourceGroup `
        --name $AksName `
        --update-cluster-autoscaler `
        --min-count 2 `
        --max-count 5

    • Check to see the current number of nodes (should be 2 now).

      kubectl get nodes
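
    To double-check that the autoscaler settings were applied to the node pool, something like the following works (a sketch; the field names come from the az aks show output):

    ```bash
    az aks show \
      --resource-group $ResourceGroup \
      --name $AksName \
      --query "agentPoolProfiles[].{name:name, autoscaling:enableAutoScaling, min:minCount, max:maxCount}" \
      -o table
    ```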

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training


    · 12 min read
    Paul Yu

    Welcome to Day 2 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about getting an existing application running in Kubernetes with a full pipeline in GitHub Actions. Today we'll evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes and Azure resources.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Implement environment variables using ConfigMaps
    • Implement persistent volumes using Azure Files
    • Implement secrets using Azure Key Vault
    • Re-package deployments
    • Conclusion
    • Resources
    caution

    Before you begin, make sure you've gone through yesterday's post to set up your AKS cluster.

    Gather requirements

    The eShopOnWeb application is written in .NET 7 and has two major pieces of functionality. The web UI is where customers can browse and shop. The web UI also includes an admin portal for managing the product catalog. This admin portal is packaged as a WebAssembly application and relies on a separate REST API service. Both the web UI and the REST API connect to the same SQL Server container.

    Looking through the source code (which can be found here), we can identify requirements for configs, persistent storage, and secrets.

    Database server

    • Need to store the password for the sa account as a secure secret
    • Need persistent storage volume for data directory
    • Need to inject environment variables for SQL Server license type and EULA acceptance

    Web UI and REST API service

    • Need to store database connection string as a secure secret
    • Need to inject ASP.NET environment variables to override app settings
    • Need persistent storage volume for ASP.NET key storage

    Implement environment variables using ConfigMaps

    ConfigMaps are relatively straight-forward to create. If you were following along with the examples last week, this should be review 😉

    Create a ConfigMap to store database environment variables.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mssql-settings
    data:
      MSSQL_PID: Developer
      ACCEPT_EULA: "Y"
    EOF

    Create another ConfigMap to store ASP.NET environment variables.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aspnet-settings
    data:
      ASPNETCORE_ENVIRONMENT: Development
    EOF
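
    Before moving on, it doesn't hurt to confirm both ConfigMaps exist (a quick check using the names above):

    ```bash
    kubectl get configmap mssql-settings aspnet-settings
    ```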

    Implement persistent volumes using Azure Files

    Similar to last week, we'll take advantage of storage classes built into AKS. For our SQL Server data, we'll use the azurefile-csi-premium storage class and leverage an Azure Files resource as our PersistentVolume.

    Create a PersistentVolumeClaim (PVC) for persisting SQL Server data.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mssql-data
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: azurefile-csi-premium
      resources:
        requests:
          storage: 5Gi
    EOF

    Create another PVC for persisting ASP.NET data.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: aspnet-data
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: azurefile-csi-premium
      resources:
        requests:
          storage: 5Gi
    EOF
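
    You can verify both claims with a quick check; STATUS should report Bound once the Azure Files shares are provisioned.

    ```bash
    kubectl get pvc mssql-data aspnet-data
    ```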

    Implement secrets using Azure Key Vault

    It's a well-known fact that Kubernetes Secrets are not really secrets. They're just base64-encoded values and not secure, especially if malicious users have access to your Kubernetes cluster.

    In a production scenario, you will want to leverage an external vault like Azure Key Vault or HashiCorp Vault to encrypt and store secrets.

    With AKS, we can enable the Secrets Store CSI driver add-on which will allow us to leverage Azure Key Vault.

    # Set some variables
    RG_NAME=<YOUR_RESOURCE_GROUP_NAME>
    AKS_NAME=<YOUR_AKS_CLUSTER_NAME>
    ACR_NAME=<YOUR_ACR_NAME>

    az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name $AKS_NAME \
    --resource-group $RG_NAME

    With the add-on enabled, you should see aks-secrets-store-csi-driver and aks-secrets-store-provider-azure resources installed on each node in your Kubernetes cluster.

    Run the command below to verify.

    kubectl get pods \
    --namespace kube-system \
    --selector 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'

    The Secrets Store CSI driver allows us to use secret stores via Container Storage Interface (CSI) volumes. This provider offers capabilities such as mounting and syncing between the secure vault and Kubernetes Secrets. On AKS, the Azure Key Vault Provider for Secrets Store CSI Driver enables integration with Azure Key Vault.

    You may not have an Azure Key Vault created yet, so let's create one and add some secrets to it.

    AKV_NAME=$(az keyvault create \
    --name akv-eshop$RANDOM \
    --resource-group $RG_NAME \
    --query name -o tsv)

    # Database server password
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-password \
    --value "@someThingComplicated1234"

    # Catalog database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-catalog \
    --value "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    # Identity database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-identity \
    --value "Server=db;Database=Microsoft.eShopOnWeb.Identity;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"
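
    To confirm the three secrets landed in the vault, a quick listing works (a sketch using the $AKV_NAME variable set above):

    ```bash
    az keyvault secret list \
      --vault-name $AKV_NAME \
      --query "[].name" -o tsv
    ```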

    Pods authentication using Azure Workload Identity

    In order for our Pods to retrieve secrets from Azure Key Vault, we'll need to set up a way for the Pod to authenticate against Azure AD. This can be achieved by implementing the new Azure Workload Identity feature of AKS.

    info

    At the time of this writing, the workload identity feature of AKS is in Preview.

    The workload identity feature within AKS allows us to leverage native Kubernetes resources and link a Kubernetes ServiceAccount to an Azure Managed Identity to authenticate against Azure AD.

    For the authentication flow, our Kubernetes cluster will act as an OpenID Connect (OIDC) issuer and will be able to issue identity tokens to ServiceAccounts, which will be assigned to our Pods.

    The Azure Managed Identity will be granted permission to access secrets in our Azure Key Vault and with the ServiceAccount being assigned to our Pods, they will be able to retrieve our secrets.

    For more information on how the authentication mechanism all works, check out this doc.

    To implement all this, start by enabling the new preview feature for AKS.

    az feature register \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"
    caution

    This can take several minutes to complete.

    Check the status and ensure the state shows Registered before moving forward.

    az feature show \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"
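
    If you prefer a one-liner that prints just the state, a small convenience sketch:

    ```bash
    az feature show \
      --namespace "Microsoft.ContainerService" \
      --name "EnableWorkloadIdentityPreview" \
      --query properties.state -o tsv
    ```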

    Update your AKS cluster to enable the workload identity feature and enable the OIDC issuer endpoint.

    az aks update \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --enable-workload-identity \
    --enable-oidc-issuer

    Create an Azure Managed Identity and retrieve its client ID.

    MANAGED_IDENTITY_CLIENT_ID=$(az identity create \
    --name aks-workload-identity \
    --resource-group $RG_NAME \
    --subscription $(az account show --query id -o tsv) \
    --query 'clientId' -o tsv)

    Create the Kubernetes ServiceAccount.

    # Set namespace (this must align with the namespace that your app is deployed into)
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account name
    SERVICE_ACCOUNT_NAME=eshop-serviceaccount

    # Create the service account
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      annotations:
        azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
      labels:
        azure.workload.identity/use: "true"
      name: ${SERVICE_ACCOUNT_NAME}
      namespace: ${SERVICE_ACCOUNT_NAMESPACE}
    EOF
    info

    Note: to enable this ServiceAccount to work with Azure Workload Identity, you must annotate the resource with azure.workload.identity/client-id and add a label of azure.workload.identity/use: "true".

    That was a lot... Let's review what we just did.

    We have an Azure Managed Identity (object in Azure AD), an OIDC issuer URL (endpoint in our Kubernetes cluster), and a Kubernetes ServiceAccount.

    The next step is to "tie" these components together and establish a Federated Identity Credential so that Azure AD can trust authentication requests from your Kubernetes cluster.

    info

    This identity federation can be established between Azure AD and any Kubernetes cluster, not just AKS 🤗

    To establish the federated credential, we'll need the OIDC issuer URL, and a subject which points to your Kubernetes ServiceAccount.

    # Get the OIDC issuer URL
    OIDC_ISSUER_URL=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)

    # Set the subject name using this format: `system:serviceaccount:<YOUR_SERVICE_ACCOUNT_NAMESPACE>:<YOUR_SERVICE_ACCOUNT_NAME>`
    SUBJECT=system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME

    az identity federated-credential create \
    --name aks-federated-credential \
    --identity-name aks-workload-identity \
    --resource-group $RG_NAME \
    --issuer $OIDC_ISSUER_URL \
    --subject $SUBJECT
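
    You can verify the federated credential was created against the managed identity with a quick listing (a sketch using the same names as above):

    ```bash
    az identity federated-credential list \
      --identity-name aks-workload-identity \
      --resource-group $RG_NAME \
      -o table
    ```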

    With the authentication components set, we can now create a SecretProviderClass which includes details about the Azure Key Vault, the secrets to pull out from the vault, and identity used to access the vault.

    # Get the tenant id for the key vault
    TENANT_ID=$(az keyvault show \
    --name $AKV_NAME \
    --resource-group $RG_NAME \
    --query properties.tenantId -o tsv)

    # Create the secret provider for azure key vault
    kubectl apply -f - <<EOF
    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      name: eshop-azure-keyvault
    spec:
      provider: azure
      parameters:
        usePodIdentity: "false"
        useVMManagedIdentity: "false"
        clientID: "${MANAGED_IDENTITY_CLIENT_ID}"
        keyvaultName: "${AKV_NAME}"
        cloudName: ""
        objects: |
          array:
            - |
              objectName: mssql-password
              objectType: secret
              objectVersion: ""
            - |
              objectName: mssql-connection-catalog
              objectType: secret
              objectVersion: ""
            - |
              objectName: mssql-connection-identity
              objectType: secret
              objectVersion: ""
        tenantId: "${TENANT_ID}"
      secretObjects:
        - secretName: eshop-secrets
          type: Opaque
          data:
            - objectName: mssql-password
              key: mssql-password
            - objectName: mssql-connection-catalog
              key: mssql-connection-catalog
            - objectName: mssql-connection-identity
              key: mssql-connection-identity
    EOF

    Finally, let's grant the Azure Managed Identity permissions to retrieve secrets from the Azure Key Vault.

    az keyvault set-policy \
    --name $AKV_NAME \
    --secret-permissions get \
    --spn $MANAGED_IDENTITY_CLIENT_ID

    Re-package deployments

    Update your database deployment to load environment variables from our ConfigMap, attach the PVC and SecretProviderClass as volumes, mount the volumes into the Pod, and use the ServiceAccount to retrieve secrets.

    Additionally, you may notice the database Pod is set to use fsGroup:10001 as part of the securityContext. This is required as the MSSQL container runs using a non-root account called mssql and this account has the proper permissions to read/write data at the /var/opt/mssql mount path.

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: db
      labels:
        app: db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          securityContext:
            fsGroup: 10001
          serviceAccountName: ${SERVICE_ACCOUNT_NAME}
          containers:
            - name: db
              image: mcr.microsoft.com/mssql/server:2019-latest
              ports:
                - containerPort: 1433
              envFrom:
                - configMapRef:
                    name: mssql-settings
              env:
                - name: MSSQL_SA_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: eshop-secrets
                      key: mssql-password
              resources: {}
              volumeMounts:
                - name: mssqldb
                  mountPath: /var/opt/mssql
                - name: eshop-secrets
                  mountPath: "/mnt/secrets-store"
                  readOnly: true
          volumes:
            - name: mssqldb
              persistentVolumeClaim:
                claimName: mssql-data
            - name: eshop-secrets
              csi:
                driver: secrets-store.csi.k8s.io
                readOnly: true
                volumeAttributes:
                  secretProviderClass: eshop-azure-keyvault
    EOF
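
    Before updating the other deployments, it's worth confirming that the database Pod starts cleanly and that the CSI driver synced the Kubernetes Secret (the eshop-secrets Secret defined in secretObjects only appears after a Pod mounts the volume):

    ```bash
    kubectl rollout status deployment/db
    kubectl get secret eshop-secrets
    ```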

    We'll update the API and Web deployments in a similar way.

    # Set the image tag
    IMAGE_TAG=<YOUR_IMAGE_TAG>

    # API deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
      labels:
        app: api
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          serviceAccount: ${SERVICE_ACCOUNT_NAME}
          containers:
            - name: api
              image: ${ACR_NAME}.azurecr.io/api:${IMAGE_TAG}
              ports:
                - containerPort: 80
              envFrom:
                - configMapRef:
                    name: aspnet-settings
              env:
                - name: ConnectionStrings__CatalogConnection
                  valueFrom:
                    secretKeyRef:
                      name: eshop-secrets
                      key: mssql-connection-catalog
                - name: ConnectionStrings__IdentityConnection
                  valueFrom:
                    secretKeyRef:
                      name: eshop-secrets
                      key: mssql-connection-identity
              resources: {}
              volumeMounts:
                - name: aspnet
                  mountPath: ~/.aspnet/https:/root/.aspnet/https:ro
                - name: eshop-secrets
                  mountPath: "/mnt/secrets-store"
                  readOnly: true
          volumes:
            - name: aspnet
              persistentVolumeClaim:
                claimName: aspnet-data
            - name: eshop-secrets
              csi:
                driver: secrets-store.csi.k8s.io
                readOnly: true
                volumeAttributes:
                  secretProviderClass: eshop-azure-keyvault
    EOF

    # Web deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
      labels:
        app: web
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          serviceAccount: ${SERVICE_ACCOUNT_NAME}
          containers:
            - name: web
              image: ${ACR_NAME}.azurecr.io/web:${IMAGE_TAG}
              ports:
                - containerPort: 80
              envFrom:
                - configMapRef:
                    name: aspnet-settings
              env:
                - name: ConnectionStrings__CatalogConnection
                  valueFrom:
                    secretKeyRef:
                      name: eshop-secrets
                      key: mssql-connection-catalog
                - name: ConnectionStrings__IdentityConnection
                  valueFrom:
                    secretKeyRef:
                      name: eshop-secrets
                      key: mssql-connection-identity
              resources: {}
              volumeMounts:
                - name: aspnet
                  mountPath: ~/.aspnet/https:/root/.aspnet/https:ro
                - name: eshop-secrets
                  mountPath: "/mnt/secrets-store"
                  readOnly: true
          volumes:
            - name: aspnet
              persistentVolumeClaim:
                claimName: aspnet-data
            - name: eshop-secrets
              csi:
                driver: secrets-store.csi.k8s.io
                readOnly: true
                volumeAttributes:
                  secretProviderClass: eshop-azure-keyvault
    EOF

    If all went well with your deployment updates, you should be able to browse to your website and buy some merchandise again 🥳

    echo "http://$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Conclusion

    Although there are no visible changes to our website, we've made a ton of changes on the Kubernetes backend to make this application much more secure and resilient.

    We used a combination of Kubernetes resources and AKS-specific features to achieve our goal of securing our secrets and ensuring data is not lost on container crashes and restarts.

    To learn more about the components we leveraged here today, check out the resources and additional tutorials listed below.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 4 min read
    Steven Murawski
    Paul Yu
    Josh Duffney

    Welcome to Day 2 of Week 1 of #CloudNativeNewYear!

    Today, we'll focus on building an understanding of containers.

    What We'll Cover

    • Introduction
    • How do Containers Work?
    • Why are Containers Becoming so Popular?
    • Conclusion
    • Resources
    • Learning Path

    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    In the beginning, we deployed our applications onto physical servers. We only had a certain number of those servers, so often they hosted multiple applications. This led to some problems when those applications shared dependencies. Upgrading one application could break another application on the same server.

    Enter virtualization. Virtualization allowed us to run our applications in an isolated operating system instance. This removed much of the risk of updating shared dependencies. However, it increased our overhead since we had to run a full operating system for each application environment.

    To address the challenges created by virtualization, containerization was created to improve isolation without duplicating kernel level resources. Containers provide efficient and consistent deployment and runtime experiences for our applications and have become very popular as a way of packaging and distributing applications.

    How do Containers Work?

    Containers build on two capabilities in the Linux operating system, namespaces and cgroups. These constructs allow the operating system to provide isolation to a process or group of processes, keeping their access to filesystem resources separate and providing controls on resource utilization. This, combined with tooling to help package, deploy, and run container images has led to their popularity in today’s operating environment. This provides us our isolation without the overhead of additional operating system resources.

    When a container host is deployed on an operating system, it schedules access to the operating system's components. It does this by providing a logically isolated group, called a namespace, that contains the processes for a given application. The container host manages and schedules access from each namespace to the host OS, and it uses cgroups to allocate compute resources. Together, with the help of cgroups and namespaces, the container host can schedule multiple applications to access host OS resources.

    Overall, this gives the illusion of virtualizing the host OS, where each application gets its own OS. In actuality, all the applications are running on the same operating system and sharing the same kernel as the container host.
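
    If you have Docker handy, a tiny sketch of cgroup-based resource controls in action looks like this (the flags and image are illustrative):

    ```bash
    # The container is capped at half a CPU core and 256 MiB of memory by cgroups,
    # while still sharing the host's kernel
    docker run --rm --cpus="0.5" --memory="256m" alpine echo "hello from a constrained container"
    ```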

    Containers are popular in the software development industry because they provide several benefits over traditional virtualization methods. Some of these benefits include:

    • Portability: Containers make it easy to move an application from one environment to another without having to worry about compatibility issues or missing dependencies.
    • Isolation: Containers provide a level of isolation between the application and the host system, which means that the application running in the container cannot access the host system's resources.
    • Scalability: Containers make it easy to scale an application up or down as needed, which is useful for applications that experience a lot of traffic or need to handle a lot of data.
    • Resource Efficiency: Containers are more resource-efficient than traditional virtualization methods because they don't require a full operating system to be running on each virtual machine.
    • Cost-Effective: Containers are more cost-effective than traditional virtualization methods because they don't require expensive hardware or licensing fees.

    Conclusion

    Containers are a powerful technology that allows developers to package and deploy applications in a portable and isolated environment. This technology is becoming increasingly popular in the world of software development and is being used by many companies and organizations to improve their application deployment and management processes. With the benefits of portability, isolation, scalability, resource efficiency, and cost-effectiveness, containers are definitely worth considering for your next application development project.

    Containerizing applications is a key step in modernizing them, and there are many other patterns that can be adopted to achieve cloud-native architectures, including using serverless platforms, Kubernetes, and implementing DevOps practices.

    Resources

    Learning Path


    · 7 min read
    Vinicius Apolinario

    Welcome to Day 3 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about using Draft to accelerate your Kubernetes adoption. Today we'll explore the topic of Windows containers.

    What We'll Cover

    • Introduction
    • Windows containers overview
    • Windows base container images
    • Isolation
    • Exercise: Try this yourself!
    • Resources: For self-study!

    Introduction

    Windows containers were launched along with Windows Server 2016 and have evolved since. In its latest release, Windows Server 2022, Windows containers have reached a great level of maturity and allow customers to run production-grade workloads.

    While suitable for new development, Windows containers also give developers and operations a different approach than Linux containers. They allow existing Windows applications to be containerized with little or no code change, and they let professionals who are more comfortable with the Windows platform and OS leverage their skill set while taking advantage of the container platform.

    Windows container overview

    In essence, Windows containers are very similar to Linux. Since Windows containers use the same foundation of Docker containers, you can expect that the same architecture applies - with the specific notes of the Windows OS. For example, when running a Windows container via Docker, you use the same commands, such as docker run. To pull a container image, you can use docker pull, just like on Linux. However, to run a Windows container, you also need a Windows container host. This requirement is there because, as you might remember, a container shares the OS kernel with its container host.

    On Kubernetes, Windows containers have been supported since Windows Server 2019. Just like with Docker, you can manage Windows containers like any other resource in the Kubernetes ecosystem. A Windows node can be part of a Kubernetes cluster, allowing you to run Windows container-based applications on services like Azure Kubernetes Service. To deploy a Windows application to a Windows pod in Kubernetes, you author a YAML specification much like you would for Linux. The main difference is that you point to an image that runs on Windows, and you specify a node selector to indicate that the pod needs to run on a Windows node.

    Windows base container images

    For Windows containers, you will always use a base container image provided by Microsoft. This base image contains the OS binaries the container needs to run, and it can be as large as 3GB+ or as small as ~300MB. The difference in size is a consequence of the APIs and components available in each Windows base container image. There are primarily three images: Nano Server, Server Core, and Server.

    Nano Server is the smallest image, ranging around 300MB. It's a base container image for new development and cloud-native scenarios. Applications need to target Nano Server as the Windows OS, so not all frameworks will work. For example, .NET works on Nano Server, but .NET Framework doesn't. Other third-party frameworks also work on Nano Server, such as Apache, NodeJS, Python, Tomcat, Java runtime, JBoss, and Redis, among others.

    Server Core is a much larger base container image, ranging around 1.25GB. Its larger size is compensated for by its application compatibility. Simply put, any application that meets the requirements to run in a Windows container can be containerized with this image.

    The Server image builds on the Server Core one. It ranges around 3.1GB and has even greater application compatibility than the Server Core image. In addition to the traditional Windows APIs and components, this image allows for scenarios such as Machine Learning via DirectX with GPU access.

    The best image for your scenario is dependent on the requirements your application has on the Windows OS inside a container. However, there are some scenarios that are not supported at all on Windows containers - such as GUI or RDP dependent applications, some Windows Server infrastructure roles, such as Active Directory, among others.

    Isolation

    When running containers, the kernel of the container host is shared with the containers running on it. While extremely convenient, this poses a potential risk for multi-tenant scenarios. If one container is compromised and has access to the host, it could potentially compromise other containers in the same system.

    For enterprise customers running on-premises (or even in the cloud), this can be mitigated by using a VM as a container host and considering the VM itself a security boundary. However, if multiple workloads from different tenants need to share the same host, Windows containers offer another option: Hyper-V isolation. While the name Hyper-V is associated with VMs, its virtualization capabilities extend to other services, including containers. Hyper-V isolated containers run on a purpose built, extremely small, highly performant VM. However, you manage a container running with Hyper-V isolation the same way you do with a process isolated one. In fact, the only notable difference is that you need to append the --isolation=hyperv tag to the docker run command.

    Exercise

    Here are a few examples of how to use Windows containers:

    Run Windows containers via Docker on your machine

    To pull a Windows base container image:

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022

    To run a basic IIS container:

    #This command will pull and start an IIS container. You can access it at http://<your local IP>:8080
    docker run -d -p 8080:80 mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    Run the same IIS container with Hyper-V isolation

    #This command will pull and start an IIS container with Hyper-V isolation. You can access it at http://<your local IP>:8080
    docker run -d -p 8080:80 --isolation=hyperv mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    To run a Windows container interactively:

    docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 powershell

    Run Windows containers on Kubernetes

    To prepare an AKS cluster for Windows containers: Note: Replace the values on the example below with the ones from your environment.

    echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
    az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --generate-ssh-keys \
    --windows-admin-username $WINDOWS_USERNAME \
    --vm-set-type VirtualMachineScaleSets \
    --network-plugin azure

    To add a Windows node pool for Windows containers:

    az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwin \
    --node-count 1

    Deploy a sample ASP.Net application to the AKS cluster above using the YAML file below:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      replicas: 1
      template:
        metadata:
          name: sample
          labels:
            app: sample
        spec:
          nodeSelector:
            "kubernetes.io/os": windows
          containers:
            - name: sample
              image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
              resources:
                limits:
                  cpu: 1
                  memory: 800M
              ports:
                - containerPort: 80
      selector:
        matchLabels:
          app: sample
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sample
    spec:
      type: LoadBalancer
      ports:
        - protocol: TCP
          port: 80
      selector:
        app: sample
    Save the file above and run the command below on your Kubernetes cluster:

    kubectl apply -f <filename>

    Once deployed, you can access the application by getting the IP address of your service:

    kubectl get service
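
    As in other posts in this series, you can print the public URL once the LoadBalancer assigns an external IP (a convenience sketch using the sample Service above):

    ```bash
    echo "http://$(kubectl get service sample -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    ```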

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    · 4 min read
    Jorge Arteiro

    Welcome to Day 4 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about Windows Containers. Today we'll explore addons and extensions available to Azure Kubernetes Services (AKS).

    What We'll Cover

    • Introduction
    • Add-ons
    • Extensions
    • Add-ons vs Extensions
    • Resources

    Introduction

    Azure Kubernetes Service (AKS) is a fully managed container orchestration service that makes it easy to deploy and manage containerized applications on Azure. AKS offers a number of features and capabilities, including the ability to extend its supported functionality through the use of add-ons and extensions.

    There are also integrations available from open-source projects and third parties, but they are not covered by the AKS support policy.

    Add-ons

    Add-ons provide a supported way to extend AKS. Installation, configuration, and lifecycle are managed by AKS following pre-determined update rules.

    As an example, let's enable Container Insights with the monitoring add-on on an existing AKS cluster using the az aks enable-addons --addons CLI command:

    az aks enable-addons \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --addons monitoring

    or you can use az aks create --enable-addons when creating new clusters

    az aks create \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --enable-addons monitoring

    The current available add-ons are:

    1. http_application_routing - Configure ingress with automatic public DNS name creation. Only recommended for development.
    2. monitoring - Container Insights monitoring.
    3. virtual-node - CNCF virtual nodes open source project.
    4. azure-policy - Azure Policy for AKS.
    5. ingress-appgw - Application Gateway Ingress Controller (AGIC).
    6. open-service-mesh - CNCF Open Service Mesh project.
    7. azure-keyvault-secrets-provider - Azure Key Vault Secrets Provider for Secret Store CSI Driver.
    8. web_application_routing - Managed NGINX ingress Controller.
    9. keda - CNCF Event-driven autoscaling project.

    For more details, get the updated list of AKS Add-ons here
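
    To see which add-ons are already enabled on a cluster, recent Azure CLI versions include an az aks addon group (a quick sketch using the same placeholder names):

    ```bash
    az aks addon list \
      --name MyManagedCluster \
      --resource-group MyResourceGroup \
      -o table
    ```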

    Extensions

    Cluster extensions use Helm charts and integrate with Azure Resource Manager (ARM) to provide installation and lifecycle management of capabilities on top of AKS.

    Extensions can be auto-upgraded across minor versions, but this requires extra management and configuration. Using the scope parameter, an extension can be installed across the whole cluster or per namespace.

    AKS extensions require an Azure CLI extension to be installed. To add or update this CLI extension, use the following commands:

    az extension add --name k8s-extension

    and to update an existing extension

    az extension update --name k8s-extension

    There are only 3 available extensions:

    1. Dapr - CNCF Dapr project.
    2. Azure ML - Integrate Azure Machine Learning with AKS to train, inference and manage ML models.
    3. Flux (GitOps) - CNCF Flux project integrated with AKS to enable cluster configuration and application deployment using GitOps.

    As an example, you can install Azure ML using the following command:

    az k8s-extension create \
    --name aml-compute --extension-type Microsoft.AzureML.Kubernetes \
    --scope cluster --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters \
    --configuration-settings enableInference=True allowInsecureConnections=True
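
    And to list the extensions currently installed on a cluster (a sketch using the same placeholders):

    ```bash
    az k8s-extension list \
      --cluster-name <clusterName> \
      --resource-group <resourceGroupName> \
      --cluster-type managedClusters \
      -o table
    ```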

    For more details, get the updated list of AKS Extensions here

    Add-ons vs Extensions

    AKS add-ons bring the advantage of being fully managed by AKS itself, while AKS extensions are more flexible and configurable but require an extra level of management.

    Add-ons are part of the AKS resource provider in the Azure API, and AKS Extensions are a separate resource provider on the Azure API.

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    · 11 min read
    Paul Yu

    Welcome to Day 2 of Week 2 of #CloudNativeNewYear!

    The theme for this week is #Kubernetes fundamentals. Yesterday we talked about how to deploy a containerized web app workload to Azure Kubernetes Service (AKS). Today we'll explore the topic of services and ingress and walk through the steps of making our containers accessible both internally as well as over the internet so that you can share it with the world 😊

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Exposing Pods via Service
    • Exposing Services via Ingress
    • Takeaways
    • Resources

    Exposing Pods via Service

    There are a few ways to expose your pod in Kubernetes. One way is to take an imperative approach and use the kubectl expose command. This is probably the quickest way to achieve your goal, but it isn't the best way. A better way is to take a declarative approach: create a service manifest file and deploy it using the kubectl apply command.

    Don't worry if you are unsure of how to make this manifest, we'll use kubectl to help generate it.

    First, let's ensure we have the database deployed on our AKS cluster.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests/deployment-db.yaml

    Next, let's deploy the application. If you are following along from yesterday's content, there isn't anything you need to change; however, if you are deploying the app from scratch, you'll need to modify the deployment-app.yaml manifest and update it with your image tag and database pod's IP address.

    kubectl apply -f ./manifests/deployment-app.yaml

    Now, let's expose the database using a service so that we can leverage Kubernetes' built-in service discovery to be able to reference it by name; not pod IP. Run the following command.

    kubectl expose deployment azure-voting-db \
    --port=5432 \
    --target-port=5432

    With the database exposed using a service, we can update the app deployment manifest to use the service name instead of the pod IP. This way, if the pod ever gets assigned a new IP, we don't have to worry about updating the IP each time and redeploying our web application. Kubernetes has an internal service discovery mechanism in place that allows us to reference a service by its name.
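
    Under the hood, this works because the Service name is resolvable through the cluster's DNS. A quick way to see that (a sketch using a throwaway busybox pod) is:

    ```bash
    # Resolve the database service by name from inside the cluster
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup azure-voting-db
    ```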

    Let's make an update to the manifest. Replace the environment variable for DATABASE_SERVER with the following:

    - name: DATABASE_SERVER
      value: azure-voting-db

    Re-deploy the app with the updated configuration.

    kubectl apply -f ./manifests/deployment-app.yaml

    One service down, one to go. Run the following command to expose the web application.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080

    Notice the --type argument has a value of LoadBalancer. This service type is implemented by the cloud-controller-manager, which is part of the Kubernetes control plane. When using a managed Kubernetes cluster such as Azure Kubernetes Service, a public Standard Load Balancer will be provisioned when the service type is set to LoadBalancer. The load balancer will also have a public IP assigned, which makes your deployment publicly available.

    Kubernetes supports four service types:

    • ClusterIP: this is the default and limits service access to internal traffic within the cluster
    • NodePort: this assigns a port mapping on the node's IP address and allows traffic from the virtual network (outside the cluster)
    • LoadBalancer: as mentioned above, this creates a cloud-based load balancer
    • ExternalName: this is used in special case scenarios where you want to map a service to an external DNS name

    📝 NOTE: When exposing a web application to the internet, allowing external users to connect to your Service directly is not the best approach. Instead, you should use an Ingress, which we'll cover in the next section.

    Now, let's confirm you can reach the web app from the internet. You can use the following command to print the URL to your terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Great! The kubectl expose command gets the job done, but as mentioned above, it is not the best method of exposing deployments. It is better to expose deployments declaratively using a service manifest, so let's delete the services and redeploy using manifests.

    kubectl delete service azure-voting-db azure-voting-app

    To use kubectl to generate our manifest file, we can use the same kubectl expose command that we ran earlier but this time, we'll include --output=yaml and --dry-run=client. This will instruct the command to output the manifest that would be sent to the kube-api server in YAML format to the terminal.

    Generate the manifest for the database service.

    kubectl expose deployment azure-voting-db \
    --type=ClusterIP \
    --port=5432 \
    --target-port=5432 \
    --output=yaml \
    --dry-run=client > ./manifests/service-db.yaml

    Generate the manifest for the application service.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080 \
    --output=yaml \
    --dry-run=client > ./manifests/service-app.yaml

    The command above redirected the YAML output to your manifests directory. Here is what the web application service looks like.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: azure-voting-app
  type: LoadBalancer
status:
  loadBalancer: {}

💡 TIP: To view the schema of any api-resource in Kubernetes, you can use the kubectl explain command. In this case, the kubectl explain service command will tell us exactly what each of these fields does.
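
For example, you can drill into nested fields with dot notation to see what they accept (these two queries are just illustrations).

# Describe the type field of a Service spec
kubectl explain service.spec.type

# Describe the ports array and its fields
kubectl explain service.spec.ports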

    Re-deploy the services using the new service manifests.

    kubectl apply -f ./manifests/service-db.yaml -f ./manifests/service-app.yaml

    # You should see TYPE is set to LoadBalancer and the EXTERNAL-IP is set
    kubectl get service azure-voting-db azure-voting-app

Confirm that our application is accessible again. Run the following command to print the URL to the terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    That was easy, right? We just exposed both of our pods using Kubernetes services. The database only needs to be accessible from within the cluster so ClusterIP is perfect for that. For the web application, we specified the type to be LoadBalancer so that we can access the application over the public internet.

    But wait... remember that if you want to expose web applications over the public internet, a Service with a public IP is not the best way; the better approach is to use an Ingress resource.

    Exposing Services via Ingress

If you read through the Kubernetes documentation on Ingress, you will see a diagram that depicts the Ingress sitting in front of the Service resource with a routing rule between them. In order to use Ingress, you need to deploy an Ingress Controller, which can be configured with many routing rules to forward traffic to one or many backend services. So effectively, an Ingress is a load balancer for your Services.

    With that said, we no longer need a service type of LoadBalancer since the service does not need to be accessible from the internet. It only needs to be accessible from the Ingress Controller (internal to the cluster) so we can change the service type to ClusterIP.

    Update your service.yaml file to look like this:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: azure-voting-app

    📝 NOTE: The default service type is ClusterIP so we can omit the type altogether.

    Re-apply the app service manifest.

    kubectl apply -f ./manifests/service-app.yaml

    # You should see TYPE set to ClusterIP and EXTERNAL-IP set to <none>
    kubectl get service azure-voting-app

    Next, we need to install an Ingress Controller. There are quite a few options, and the Kubernetes-maintained NGINX Ingress Controller is commonly deployed.

    You could install this manually by following these instructions, but if you do that you'll be responsible for maintaining and supporting the resource.

    I like to take advantage of free maintenance and support when I can get it, so I'll opt to use the Web Application Routing add-on for AKS.

    💡 TIP: Whenever you install an AKS add-on, it will be maintained and fully supported by Azure Support.

    Enable the web application routing add-on in our AKS cluster with the following command.

az aks addon enable \
--name <YOUR_AKS_NAME> \
--resource-group <YOUR_AKS_RESOURCE_GROUP> \
--addon web_application_routing

    ⚠️ WARNING: This command can take a few minutes to complete

    Now, let's use the same approach we took in creating our service to create our Ingress resource. Run the following command to generate the Ingress manifest.

    kubectl create ingress azure-voting-app \
    --class=webapprouting.kubernetes.azure.com \
    --rule="/*=azure-voting-app:80" \
    --output yaml \
    --dry-run=client > ./manifests/ingress.yaml

    The --class=webapprouting.kubernetes.azure.com option activates the AKS web application routing add-on. This AKS add-on can also integrate with other Azure services such as Azure DNS and Azure Key Vault for TLS certificate management and this special class makes it all work.

    The --rule="/*=azure-voting-app:80" option looks confusing but we can use kubectl again to help us understand how to format the value for the option.

    kubectl create ingress --help

    In the output you will see the following:

    --rule=[]:
    Rule in format host/path=service:port[,tls=secretname]. Paths containing the leading character '*' are
    considered pathType=Prefix. tls argument is optional.

    It expects a host and path separated by a forward-slash, then expects the backend service name and port separated by a colon. We're not using a hostname for this demo so we can omit it. For the path, an asterisk is used to specify a wildcard path prefix.

    So, the value of /*=azure-voting-app:80 creates a routing rule for all paths following the domain (or in our case since we don't have a hostname specified, the IP) to route traffic to our azure-voting-app backend service on port 80.
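
For comparison, here is a hedged sketch of the same command with a hostname and TLS secret filled in; the domain and secret name are hypothetical and we won't run this in the demo.

kubectl create ingress azure-voting-app \
--class=webapprouting.kubernetes.azure.com \
--rule="contoso.example.com/*=azure-voting-app:80,tls=azure-voting-tls" \
--output yaml \
--dry-run=client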

    📝 NOTE: Configuring the hostname and TLS is outside the scope of this demo but please visit this URL https://bit.ly/aks-webapp-routing for an in-depth hands-on lab centered around Web Application Routing on AKS.

    Your ingress.yaml file should look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: azure-voting-app
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - http:
      paths:
      - backend:
          service:
            name: azure-voting-app
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}

    Apply the app ingress manifest.

    kubectl apply -f ./manifests/ingress.yaml

    Validate the web application is available from the internet again. You can run the following command to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Takeaways

Exposing your applications both internally and externally can be easily achieved using Service and Ingress resources respectively. If your service is HTTP or HTTPS based and needs to be accessible from outside the cluster, use Ingress with an internal Service (i.e., ClusterIP or NodePort); otherwise, use the Service resource. If your TCP-based Service needs to be publicly accessible, you set the type to LoadBalancer to expose a public IP for it. To learn more about these resources, please visit the links listed below.

    Lastly, if you are unsure how to begin writing your service manifest, you can use kubectl and have it do most of the work for you 🥳

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/ingress/page/2/index.html b/cnny-2023/tags/ingress/page/2/index.html index 31141c0ba4..348d60812e 100644 --- a/cnny-2023/tags/ingress/page/2/index.html +++ b/cnny-2023/tags/ingress/page/2/index.html @@ -14,13 +14,13 @@ - +

    2 posts tagged with "ingress"

    View All Tags

    · 10 min read
    Paul Yu

    Welcome to Day 3 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we added configuration, secrets, and storage to our app. Today we'll explore how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Generate TLS certificate and store in Azure Key Vault
    • Implement custom DNS using Azure DNS
    • Enable Web Application Routing add-on for AKS
    • Implement Ingress for the web application
    • Conclusion
    • Resources

    Gather requirements

    Currently, our eShopOnWeb app has three Kubernetes services deployed:

    1. db exposed internally via ClusterIP
    2. api exposed externally via LoadBalancer
    3. web exposed externally via LoadBalancer

As mentioned in my post last week, Services allow applications to communicate with each other using DNS names. Kubernetes has built-in service discovery capabilities that allow Pods to resolve Services simply by using their names.

    In the case of our api and web deployments, they can simply reach the database by calling its name. The service type of ClusterIP for the db can remain as-is since it only needs to be accessed by the api and web apps.

    On the other hand, api and web both need to be accessed over the public internet. Currently, these services are using service type LoadBalancer which tells AKS to provision an Azure Load Balancer with a public IP address. No one is going to remember the IP addresses, so we need to make the app more accessible by adding a custom domain name and securing it with a TLS certificate.

    Here's what we're going to need:

    • Custom domain name for our app
    • TLS certificate for the custom domain name
• Routing rule to ensure requests with /api/ in the URL path are routed to the backend REST API
• Routing rule to ensure requests without /api/ in the URL path are routed to the web UI

    Just like last week, we will use the Web Application Routing add-on for AKS. But this time, we'll integrate it with Azure DNS and Azure Key Vault to satisfy all of our requirements above.

    info

    At the time of this writing the add-on is still in Public Preview

    Generate TLS certificate and store in Azure Key Vault

    We deployed an Azure Key Vault yesterday to store secrets. We'll use it again to store a TLS certificate too.

    Let's create and export a self-signed certificate for the custom domain.

    DNS_NAME=eshoponweb$RANDOM.com
    openssl req -new -x509 -nodes -out web-tls.crt -keyout web-tls.key -subj "/CN=${DNS_NAME}" -addext "subjectAltName=DNS:${DNS_NAME}"
    openssl pkcs12 -export -in web-tls.crt -inkey web-tls.key -out web-tls.pfx -password pass:
    info

    For learning purposes we'll use a self-signed certificate and a fake custom domain name.

    To browse to the site using the fake domain, we'll mimic a DNS lookup by adding an entry to your host file which maps the public IP address assigned to the ingress controller to the custom domain.

    In a production scenario, you will need to have a real domain delegated to Azure DNS and a valid TLS certificate for the domain.

    Grab your Azure Key Vault name and set the value in a variable for later use.

    RESOURCE_GROUP=cnny-week3

    AKV_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.KeyVault/vaults \
    --query "[0].name" -o tsv)

    Grant yourself permissions to get, list, and import certificates.

    MY_USER_NAME=$(az account show --query user.name -o tsv)
    MY_USER_OBJECT_ID=$(az ad user show --id $MY_USER_NAME --query id -o tsv)

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MY_USER_OBJECT_ID \
    --certificate-permissions get list import

    Upload the TLS certificate to Azure Key Vault and grab its certificate URI.

    WEB_TLS_CERT_ID=$(az keyvault certificate import \
    --vault-name $AKV_NAME \
    --name web-tls \
    --file web-tls.pfx \
    --query id \
    --output tsv)

    Implement custom DNS with Azure DNS

    Create a custom domain for our application and grab its Azure resource id.

    DNS_ZONE_ID=$(az network dns zone create \
    --name $DNS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query id \
    --output tsv)

    Enable Web Application Routing add-on for AKS

    As we enable the Web Application Routing add-on, we'll also pass in the Azure DNS Zone resource id which triggers the installation of the external-dns controller in your Kubernetes cluster. This controller will be able to write Azure DNS zone entries on your behalf as you deploy Ingress manifests.
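
If you'd like to confirm the controller was installed, here's a quick optional check; the namespace and pod names can vary by add-on version, so we simply filter across all namespaces.

# Look for the external-dns pods deployed by the add-on
kubectl get pods --all-namespaces | grep -i external-dns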

    AKS_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerService/managedClusters \
    --query "[0].name" -o tsv)

    az aks enable-addons \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --addons web_application_routing \
    --dns-zone-resource-id=$DNS_ZONE_ID \
    --enable-secret-rotation

    The add-on will also deploy a new Azure Managed Identity which is used by the external-dns controller when writing Azure DNS zone entries. Currently, it does not have permission to do that, so let's grant it permission.

    # This is where resources are automatically deployed by AKS
    NODE_RESOURCE_GROUP=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query nodeResourceGroup -o tsv)

    # This is the managed identity created by the Web Application Routing add-on
MANAGED_IDENTITY_OBJECT_ID=$(az resource show \
    --name webapprouting-${AKS_NAME} \
    --resource-group $NODE_RESOURCE_GROUP \
    --resource-type Microsoft.ManagedIdentity/userAssignedIdentities \
    --query properties.principalId \
    --output tsv)

    # Grant the managed identity permissions to write DNS entries
    az role assignment create \
    --role "DNS Zone Contributor" \
--assignee $MANAGED_IDENTITY_OBJECT_ID \
    --scope $DNS_ZONE_ID

    The Azure Managed Identity will also be used to retrieve and rotate TLS certificates from Azure Key Vault. So we'll need to grant it permission for that too.

    az keyvault set-policy \
    --name $AKV_NAME \
--object-id $MANAGED_IDENTITY_OBJECT_ID \
    --secret-permissions get \
    --certificate-permissions get

    Implement Ingress for the web application

Before we create a new Ingress manifest, let's update the existing services to use ClusterIP instead of LoadBalancer. With an Ingress in place, there is no reason for the Service resources to be accessible from outside the cluster. The new Ingress will be the only entry point for external users.

We can use the kubectl patch command to update the services.

    kubectl patch service api -p '{"spec": {"type": "ClusterIP"}}'
    kubectl patch service web -p '{"spec": {"type": "ClusterIP"}}'

    Deploy a new Ingress to place in front of the web Service. Notice there is a special annotations entry for kubernetes.azure.com/tls-cert-keyvault-uri which points back to our self-signed certificate that was uploaded to Azure Key Vault.

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.azure.com/tls-cert-keyvault-uri: ${WEB_TLS_CERT_ID}
  name: web
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - host: ${DNS_NAME}
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 80
        path: /
        pathType: Prefix
      - backend:
          service:
            name: api
            port:
              number: 80
        path: /api
        pathType: Prefix
  tls:
  - hosts:
    - ${DNS_NAME}
    secretName: web-tls
EOF

In our manifest above, we've also configured the Ingress to route traffic to either the web or api services based on the URL path requested. If the request URL includes /api/, traffic is sent to the api backend service. Otherwise, traffic is sent to the web service.

    Within a few minutes, the external-dns controller will add an A record to Azure DNS which points to the Ingress resource's public IP. With the custom domain in place, we can simply browse using this domain name.
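
To verify the record was created, you can optionally list the A records that external-dns wrote into the zone we created earlier.

# List A records in the Azure DNS zone
az network dns record-set a list \
--resource-group $RESOURCE_GROUP \
--zone-name $DNS_NAME \
--output table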

    info

    As mentioned above, since this is not a real domain name, we need to modify our host file to make it seem like our custom domain is resolving to the Ingress' public IP address.

    To get the ingress public IP, run the following:

    # Get the IP
    kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

    # Get the hostname
    kubectl get ingress web -o jsonpath="{.spec.tls[0].hosts[0]}"

    Next, open your host file and add an entry using the format <YOUR_PUBLIC_IP> <YOUR_CUSTOM_DOMAIN>. Below is an example of what it should look like.

    20.237.116.224 eshoponweb11265.com

    See this doc for more info on how to do this.
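
On Linux or macOS you can append the entry from your shell; this is a minimal sketch that assumes sudo access and reuses the variables from earlier in this post. On Windows, edit C:\Windows\System32\drivers\etc\hosts as Administrator instead.

# Append the ingress IP and custom domain to your hosts file
INGRESS_IP=$(kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo "${INGRESS_IP} ${DNS_NAME}" | sudo tee -a /etc/hosts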

    When browsing to the website, you may be presented with a warning about the connection not being private. This is due to the fact that we are using a self-signed certificate. This is expected, so go ahead and proceed anyway to load up the page.

    Why is the Admin page broken?

    If you log in using the admin@microsoft.com account and browse to the Admin page, you'll notice no products are loaded on the page.

    This is because the admin page is built using Blazor and compiled as a WebAssembly application that runs in your browser. When the application was compiled, it packed the appsettings.Development.json file as an embedded resource. This file contains the base URL for the public API and it currently points to https://localhost:5099. Now that we have a domain name, we can update the base URL and point it to our custom domain.

    From the root of the eShopOnWeb repo, update the configuration file using a sed command.

    sed -i -e "s/localhost:5099/${DNS_NAME}/g" ./src/BlazorAdmin/wwwroot/appsettings.Development.json

    Rebuild and push the container to Azure Container Registry.

    # Grab the name of your Azure Container Registry
    ACR_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerRegistry/registries \
    --query "[0].name" -o tsv)

    # Invoke a build and publish job
    az acr build \
    --registry $ACR_NAME \
    --image $ACR_NAME.azurecr.io/web:v0.1.0 \
    --file ./src/Web/Dockerfile .

    Once the container build has completed, we can issue a kubectl patch command to quickly update the web deployment to test our change.

kubectl patch deployment web -p "$(cat <<EOF
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "web",
            "image": "${ACR_NAME}.azurecr.io/web:v0.1.0"
          }
        ]
      }
    }
  }
}
EOF
)"

    If all went well, you will be able to browse the admin page again and confirm product data is being loaded 🥳

    Conclusion

The Web Application Routing add-on for AKS aims to streamline the process of exposing your application to the public using the open-source NGINX Ingress Controller. With the add-on being managed by Azure, it natively integrates with other Azure services like Azure DNS and eliminates the need to manually create DNS entries. It can also integrate with Azure Key Vault to automatically pull in TLS certificates and rotate them as needed to further reduce operational overhead.

    We are one step closer to production and in the upcoming posts we'll further operationalize and secure our deployment, so stay tuned!

    In the meantime, check out the resources listed below for further reading.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/kubernetes/index.html b/cnny-2023/tags/kubernetes/index.html index 9759120905..cd3a8574d3 100644 --- a/cnny-2023/tags/kubernetes/index.html +++ b/cnny-2023/tags/kubernetes/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "kubernetes"

    View All Tags

    · 11 min read
    Paul Yu

    Welcome to Day 2 of Week 2 of #CloudNativeNewYear!

    The theme for this week is #Kubernetes fundamentals. Yesterday we talked about how to deploy a containerized web app workload to Azure Kubernetes Service (AKS). Today we'll explore the topic of services and ingress and walk through the steps of making our containers accessible both internally as well as over the internet so that you can share it with the world 😊

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Exposing Pods via Service
    • Exposing Services via Ingress
    • Takeaways
    • Resources

    Exposing Pods via Service

There are a few ways to expose your pod in Kubernetes. One way is to take an imperative approach and use the kubectl expose command. This is probably the quickest way to achieve your goal, but it isn't the best way. A better way to expose your pod is to take a declarative approach: create a service manifest file and deploy it using the kubectl apply command.

    Don't worry if you are unsure of how to make this manifest, we'll use kubectl to help generate it.

    First, let's ensure we have the database deployed on our AKS cluster.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests/deployment-db.yaml

Next, let's deploy the application. If you are following along from yesterday's content, there isn't anything you need to change; however, if you are deploying the app from scratch, you'll need to modify the deployment-app.yaml manifest and update it with your image tag and database pod's IP address.

    kubectl apply -f ./manifests/deployment-app.yaml

    Now, let's expose the database using a service so that we can leverage Kubernetes' built-in service discovery to be able to reference it by name; not pod IP. Run the following command.

    kubectl expose deployment azure-voting-db \
    --port=5432 \
    --target-port=5432

With the database exposed using a service, we can update the app deployment manifest to use the service name instead of the pod IP. This way, if the pod ever gets assigned a new IP, we don't have to worry about updating the IP each time and redeploying our web application. Kubernetes has an internal service discovery mechanism that allows us to reference a service by its name.
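
If you want to see this service discovery in action, here's a quick optional check (a minimal sketch; the pod name and busybox image tag are just illustrative) that resolves the service name from inside the cluster.

# Run a throwaway pod and resolve the database service by name
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup azure-voting-db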

    Let's make an update to the manifest. Replace the environment variable for DATABASE_SERVER with the following:

    - name: DATABASE_SERVER
    value: azure-voting-db

    Re-deploy the app with the updated configuration.

    kubectl apply -f ./manifests/deployment-app.yaml

    One service down, one to go. Run the following command to expose the web application.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080

Notice the --type argument has a value of LoadBalancer. This service type is implemented by the cloud-controller-manager, which is part of the Kubernetes control plane. When using a managed Kubernetes cluster such as Azure Kubernetes Service, a public standard load balancer will be provisioned when the service type is set to LoadBalancer. The load balancer will also have a public IP assigned, which will make your deployment publicly available.

    Kubernetes supports four service types:

    • ClusterIP: this is the default and limits service access to internal traffic within the cluster
    • NodePort: this assigns a port mapping on the node's IP address and allows traffic from the virtual network (outside the cluster)
    • LoadBalancer: as mentioned above, this creates a cloud-based load balancer
    • ExternalName: this is used in special case scenarios where you want to map a service to an external DNS name

    📝 NOTE: When exposing a web application to the internet, allowing external users to connect to your Service directly is not the best approach. Instead, you should use an Ingress, which we'll cover in the next section.

    Now, let's confirm you can reach the web app from the internet. You can use the following command to print the URL to your terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Great! The kubectl expose command gets the job done, but as mentioned above, it is not the best method of exposing deployments. It is better to expose deployments declaratively using a service manifest, so let's delete the services and redeploy using manifests.

    kubectl delete service azure-voting-db azure-voting-app

    To use kubectl to generate our manifest file, we can use the same kubectl expose command that we ran earlier but this time, we'll include --output=yaml and --dry-run=client. This will instruct the command to output the manifest that would be sent to the kube-api server in YAML format to the terminal.

    Generate the manifest for the database service.

    kubectl expose deployment azure-voting-db \
    --type=ClusterIP \
    --port=5432 \
    --target-port=5432 \
    --output=yaml \
    --dry-run=client > ./manifests/service-db.yaml

    Generate the manifest for the application service.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080 \
    --output=yaml \
    --dry-run=client > ./manifests/service-app.yaml

    The command above redirected the YAML output to your manifests directory. Here is what the web application service looks like.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: azure-voting-app
  type: LoadBalancer
status:
  loadBalancer: {}

💡 TIP: To view the schema of any api-resource in Kubernetes, you can use the kubectl explain command. In this case, the kubectl explain service command will tell us exactly what each of these fields does.
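
For example, you can drill into nested fields with dot notation to see what they accept (these two queries are just illustrations).

# Describe the type field of a Service spec
kubectl explain service.spec.type

# Describe the ports array and its fields
kubectl explain service.spec.ports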

    Re-deploy the services using the new service manifests.

    kubectl apply -f ./manifests/service-db.yaml -f ./manifests/service-app.yaml

    # You should see TYPE is set to LoadBalancer and the EXTERNAL-IP is set
    kubectl get service azure-voting-db azure-voting-app

Confirm that our application is accessible again. Run the following command to print the URL to the terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    That was easy, right? We just exposed both of our pods using Kubernetes services. The database only needs to be accessible from within the cluster so ClusterIP is perfect for that. For the web application, we specified the type to be LoadBalancer so that we can access the application over the public internet.

    But wait... remember that if you want to expose web applications over the public internet, a Service with a public IP is not the best way; the better approach is to use an Ingress resource.

    Exposing Services via Ingress

If you read through the Kubernetes documentation on Ingress, you will see a diagram that depicts the Ingress sitting in front of the Service resource with a routing rule between them. In order to use Ingress, you need to deploy an Ingress Controller, which can be configured with many routing rules to forward traffic to one or many backend services. So effectively, an Ingress is a load balancer for your Services.

    With that said, we no longer need a service type of LoadBalancer since the service does not need to be accessible from the internet. It only needs to be accessible from the Ingress Controller (internal to the cluster) so we can change the service type to ClusterIP.

    Update your service.yaml file to look like this:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: azure-voting-app

    📝 NOTE: The default service type is ClusterIP so we can omit the type altogether.

    Re-apply the app service manifest.

    kubectl apply -f ./manifests/service-app.yaml

    # You should see TYPE set to ClusterIP and EXTERNAL-IP set to <none>
    kubectl get service azure-voting-app

    Next, we need to install an Ingress Controller. There are quite a few options, and the Kubernetes-maintained NGINX Ingress Controller is commonly deployed.

    You could install this manually by following these instructions, but if you do that you'll be responsible for maintaining and supporting the resource.

    I like to take advantage of free maintenance and support when I can get it, so I'll opt to use the Web Application Routing add-on for AKS.

    💡 TIP: Whenever you install an AKS add-on, it will be maintained and fully supported by Azure Support.

    Enable the web application routing add-on in our AKS cluster with the following command.

az aks addon enable \
--name <YOUR_AKS_NAME> \
--resource-group <YOUR_AKS_RESOURCE_GROUP> \
--addon web_application_routing

    ⚠️ WARNING: This command can take a few minutes to complete

    Now, let's use the same approach we took in creating our service to create our Ingress resource. Run the following command to generate the Ingress manifest.

    kubectl create ingress azure-voting-app \
    --class=webapprouting.kubernetes.azure.com \
    --rule="/*=azure-voting-app:80" \
    --output yaml \
    --dry-run=client > ./manifests/ingress.yaml

    The --class=webapprouting.kubernetes.azure.com option activates the AKS web application routing add-on. This AKS add-on can also integrate with other Azure services such as Azure DNS and Azure Key Vault for TLS certificate management and this special class makes it all work.

    The --rule="/*=azure-voting-app:80" option looks confusing but we can use kubectl again to help us understand how to format the value for the option.

    kubectl create ingress --help

    In the output you will see the following:

    --rule=[]:
    Rule in format host/path=service:port[,tls=secretname]. Paths containing the leading character '*' are
    considered pathType=Prefix. tls argument is optional.

    It expects a host and path separated by a forward-slash, then expects the backend service name and port separated by a colon. We're not using a hostname for this demo so we can omit it. For the path, an asterisk is used to specify a wildcard path prefix.

    So, the value of /*=azure-voting-app:80 creates a routing rule for all paths following the domain (or in our case since we don't have a hostname specified, the IP) to route traffic to our azure-voting-app backend service on port 80.
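
For comparison, here is a hedged sketch of the same command with a hostname and TLS secret filled in; the domain and secret name are hypothetical and we won't run this in the demo.

kubectl create ingress azure-voting-app \
--class=webapprouting.kubernetes.azure.com \
--rule="contoso.example.com/*=azure-voting-app:80,tls=azure-voting-tls" \
--output yaml \
--dry-run=client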

    📝 NOTE: Configuring the hostname and TLS is outside the scope of this demo but please visit this URL https://bit.ly/aks-webapp-routing for an in-depth hands-on lab centered around Web Application Routing on AKS.

    Your ingress.yaml file should look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: azure-voting-app
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - http:
      paths:
      - backend:
          service:
            name: azure-voting-app
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}

    Apply the app ingress manifest.

    kubectl apply -f ./manifests/ingress.yaml

    Validate the web application is available from the internet again. You can run the following command to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Takeaways

Exposing your applications both internally and externally can be easily achieved using Service and Ingress resources respectively. If your service is HTTP or HTTPS based and needs to be accessible from outside the cluster, use Ingress with an internal Service (i.e., ClusterIP or NodePort); otherwise, use the Service resource. If your TCP-based Service needs to be publicly accessible, you set the type to LoadBalancer to expose a public IP for it. To learn more about these resources, please visit the links listed below.

    Lastly, if you are unsure how to begin writing your service manifest, you can use kubectl and have it do most of the work for you 🥳

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/kubernetes/page/2/index.html b/cnny-2023/tags/kubernetes/page/2/index.html index fb4e07d888..2a8150c13a 100644 --- a/cnny-2023/tags/kubernetes/page/2/index.html +++ b/cnny-2023/tags/kubernetes/page/2/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "kubernetes"

    View All Tags

    · 8 min read
    Paul Yu

    Welcome to Day 4 of Week 2 of #CloudNativeNewYear!

The theme for this week is Kubernetes fundamentals. Yesterday we talked about how to set app configurations and secrets at runtime using Kubernetes ConfigMaps and Secrets. Today we'll explore the topic of persistent storage on Kubernetes and show how you can leverage Persistent Volumes and Persistent Volume Claims to ensure your PostgreSQL data survives container restarts.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Containers are ephemeral
    • Persistent storage on Kubernetes
    • Persistent storage on AKS
    • Takeaways
    • Resources

    Containers are ephemeral

    In our sample application, the frontend UI writes vote values to a backend PostgreSQL database. By default the database container stores its data on the container's local file system, so there will be data loss when the pod is re-deployed or crashes as containers are meant to start with a clean slate each time.

    Let's re-deploy our sample app and experience the problem first hand.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests

    Wait for the azure-voting-app service to be assigned a public IP then browse to the website and submit some votes. Use the command below to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Now, let's delete the pods and watch Kubernetes do what it does best... that is, re-schedule pods.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl delete --all pod --wait=false && kubectl get po -w

    Once the pods have been recovered, reload the website and confirm the vote tally has been reset to zero.

    We need to fix this so that the data outlives the container.

    Persistent storage on Kubernetes

    In order for application data to survive crashes and restarts, you must implement Persistent Volumes and Persistent Volume Claims.

A persistent volume represents storage that is available to the cluster. Storage volumes can be provisioned manually by an administrator or dynamically using the Container Storage Interface (CSI) and storage classes, which include information on how to provision CSI volumes.

    When a user needs to add persistent storage to their application, a persistent volume claim is made to allocate chunks of storage from the volume. This "claim" includes things like volume mode (e.g., file system or block storage), the amount of storage to allocate, the access mode, and optionally a storage class. Once a persistent volume claim has been deployed, users can add the volume to the pod and mount it in a container.

    In the next section, we'll demonstrate how to enable persistent storage on AKS.

    Persistent storage on AKS

    With AKS, CSI drivers and storage classes are pre-deployed into your cluster. This allows you to natively use Azure Disks, Azure Files, and Azure Blob Storage as persistent volumes. You can either bring your own Azure storage account and use it with AKS or have AKS provision an Azure storage account for you.

    To view the Storage CSI drivers that have been enabled in your AKS cluster, run the following command.

    az aks show \
    --name <YOUR_AKS_NAME> \
    --resource-group <YOUR_AKS_RESOURCE_GROUP> \
    --query storageProfile

    You should see output that looks like this.

{
  "blobCsiDriver": null,
  "diskCsiDriver": {
    "enabled": true,
    "version": "v1"
  },
  "fileCsiDriver": {
    "enabled": true
  },
  "snapshotController": {
    "enabled": true
  }
}

    To view the storage classes that have been installed in your cluster, run the following command.

    kubectl get storageclass

    Workload requirements will dictate which CSI driver and storage class you will need to use.

If you need blob (object) storage, then you should use the blobCsiDriver. The driver may not be enabled by default, but you can enable it by following the instructions found in the Resources section below.
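
Enabling it looks roughly like the following; this is a hedged sketch and the --enable-blob-driver flag may depend on your Azure CLI version.

# Enable the Azure Blob CSI driver on an existing AKS cluster
az aks update \
--name <YOUR_AKS_NAME> \
--resource-group <YOUR_AKS_RESOURCE_GROUP> \
--enable-blob-driver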

If you need disk or file storage, you should leverage either the diskCsiDriver or the fileCsiDriver. The decision between these two boils down to whether the underlying storage needs to be accessible by one pod or by multiple pods. It is important to note that diskCsiDriver currently supports access from a single pod only. Therefore, if you need data to be accessible by multiple pods at the same time, you should opt for fileCsiDriver.

For our PostgreSQL deployment, we'll use the diskCsiDriver and have AKS create an Azure Disk resource for us. There is no need to create a PV resource; all we need to do is create a PVC using the managed-csi-premium storage class.

    Run the following command to create the PVC.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: managed-csi-premium
EOF

    When you check the PVC resource, you'll notice the STATUS is set to Pending. It will be set to Bound once the volume is mounted in the PostgreSQL container.

    kubectl get persistentvolumeclaim

    Let's delete the azure-voting-db deployment.

    kubectl delete deploy azure-voting-db

    Next, we need to apply an updated deployment manifest which includes our PVC.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-db
  name: azure-voting-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-voting-db
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: azure-voting-db
    spec:
      containers:
      - image: postgres:15.0-alpine
        name: postgres
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: azure-voting-secret
              key: POSTGRES_PASSWORD
        resources: {}
        volumeMounts:
        - name: mypvc
          mountPath: "/var/lib/postgresql/data"
          subPath: "data"
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: pvc-azuredisk
EOF

    In the manifest above, you'll see that we are mounting a new volume called mypvc (the name can be whatever you want) in the pod which points to a PVC named pvc-azuredisk. With the volume in place, we can mount it in the container by referencing the name of the volume mypvc and setting the mount path to /var/lib/postgresql/data (which is the default path).

    💡 IMPORTANT: When mounting a volume into a non-empty subdirectory, you must add subPath to the volume mount and point it to a subdirectory in the volume rather than mounting at root. In our case, when Azure Disk is formatted, it leaves a lost+found directory as documented here.

    Watch the pods and wait for the STATUS to show Running and the pod's READY status shows 1/1.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl get po -w

    Verify that the STATUS of the PVC is now set to Bound

    kubectl get persistentvolumeclaim
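
Optionally, you can peek at the mounted volume from inside the running database container; deploy/azure-voting-db matches the deployment we just applied.

# List the PostgreSQL data directory backed by the Azure Disk
kubectl exec deploy/azure-voting-db -- ls /var/lib/postgresql/data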

    With the new database container running, let's restart the application pod, wait for the pod's READY status to show 1/1, then head back over to our web browser and submit a few votes.

    kubectl delete pod -lapp=azure-voting-app --wait=false && kubectl get po -lapp=azure-voting-app -w

Now the moment of truth... let's rip out the pods again, wait for the pods to be re-scheduled, and confirm our vote counts remain intact.

    kubectl delete --all pod --wait=false && kubectl get po -w

If you navigate back to the website, you'll find the votes are still there 🎉

    Takeaways

By design, containers are meant to be ephemeral, and stateless workloads are ideal on Kubernetes. However, there will come a time when your data needs to outlive the container. To persist data in your Kubernetes workloads, you need to leverage a PV, a PVC, and optionally a storage class. In our demo scenario, we leveraged the CSI drivers built into AKS and created a PVC using a pre-installed storage class. From there, we updated the database deployment to mount the PVC in the container, and AKS did the rest of the work in provisioning the underlying Azure Disk. If the built-in storage classes do not fit your needs (for example, you need to change the reclaimPolicy or the SKU of the Azure resource), you can create your own custom storage class and configure it just the way you need it 😊
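
As a rough example of what that could look like, here is a hedged sketch of a custom storage class with illustrative values; adjust the SKU and reclaim policy to your requirements.

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-managed-premium-retain
provisioner: disk.csi.azure.com
# Keep the underlying Azure Disk when the PVC is deleted
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  skuName: Premium_LRS
EOF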

    We'll revisit this topic again next week but in the meantime, check out some of the resources listed below to learn more.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/kubernetes/page/3/index.html b/cnny-2023/tags/kubernetes/page/3/index.html index 2a770b1745..9bb9246861 100644 --- a/cnny-2023/tags/kubernetes/page/3/index.html +++ b/cnny-2023/tags/kubernetes/page/3/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "kubernetes"

    View All Tags

    · 12 min read
    Paul Yu

    Welcome to Day 2 of Week 3 of #CloudNativeNewYear!

The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about getting an existing application running in Kubernetes with a full pipeline in GitHub Actions. Today we'll evaluate our sample application's configuration, storage, and networking requirements and implement them using Kubernetes and Azure resources.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Implement environment variables using ConfigMaps
    • Implement persistent volumes using Azure Files
    • Implement secrets using Azure Key Vault
    • Re-package deployments
    • Conclusion
    • Resources
    caution

    Before you begin, make sure you've gone through yesterday's post to set up your AKS cluster.

    Gather requirements

The eShopOnWeb application is written in .NET 7 and has two major pieces of functionality. The web UI is where customers can browse and shop. The web UI also includes an admin portal for managing the product catalog. This admin portal is packaged as a WebAssembly application and relies on a separate REST API service. Both the web UI and the REST API connect to the same SQL Server container.

Looking through the source code, which can be found here, we can identify requirements for configs, persistent storage, and secrets.

    Database server

    • Need to store the password for the sa account as a secure secret
    • Need persistent storage volume for data directory
    • Need to inject environment variables for SQL Server license type and EULA acceptance

    Web UI and REST API service

    • Need to store database connection string as a secure secret
    • Need to inject ASP.NET environment variables to override app settings
    • Need persistent storage volume for ASP.NET key storage

    Implement environment variables using ConfigMaps

ConfigMaps are relatively straightforward to create. If you were following along with the examples last week, this should be review 😉

    Create a ConfigMap to store database environment variables.

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: mssql-settings
data:
  MSSQL_PID: Developer
  ACCEPT_EULA: "Y"
EOF

    Create another ConfigMap to store ASP.NET environment variables.

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aspnet-settings
data:
  ASPNETCORE_ENVIRONMENT: Development
EOF

    Implement persistent volumes using Azure Files

    Similar to last week, we'll take advantage of storage classes built into AKS. For our SQL Server data, we'll use the azurefile-csi-premium storage class and leverage an Azure Files resource as our PersistentVolume.

    Create a PersistentVolumeClaim (PVC) for persisting SQL Server data.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 5Gi
EOF

    Create another PVC for persisting ASP.NET data.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aspnet-data
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 5Gi
EOF

    Implement secrets using Azure Key Vault

It's a well-known fact that Kubernetes Secrets are not really secrets. They're just base64-encoded values and are not secure, especially if malicious users have access to your Kubernetes cluster.
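
For example, anyone with read access to a Secret can decode it in one line; here we use the eshop-secrets name and mssql-password key that we'll create later in this post.

# Decode a Secret value; base64 is encoding, not encryption
kubectl get secret eshop-secrets -o jsonpath='{.data.mssql-password}' | base64 --decode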

    In a production scenario, you will want to leverage an external vault like Azure Key Vault or HashiCorp Vault to encrypt and store secrets.

    With AKS, we can enable the Secrets Store CSI driver add-on which will allow us to leverage Azure Key Vault.

    # Set some variables
    RG_NAME=<YOUR_RESOURCE_GROUP_NAME>
    AKS_NAME=<YOUR_AKS_CLUSTER_NAME>
    ACR_NAME=<YOUR_ACR_NAME>

    az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name $AKS_NAME \
    --resource-group $RG_NAME

    With the add-on enabled, you should see aks-secrets-store-csi-driver and aks-secrets-store-provider-azure resources installed on each node in your Kubernetes cluster.

    Run the command below to verify.

    kubectl get pods \
    --namespace kube-system \
    --selector 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'

    The Secrets Store CSI driver allows us to use secret stores via Container Storage Interface (CSI) volumes. This provider offers capabilities such as mounting and syncing between the secure vault and Kubernetes Secrets. On AKS, the Azure Key Vault Provider for Secrets Store CSI Driver enables integration with Azure Key Vault.

    You may not have an Azure Key Vault created yet, so let's create one and add some secrets to it.

    AKV_NAME=$(az keyvault create \
    --name akv-eshop$RANDOM \
    --resource-group $RG_NAME \
    --query name -o tsv)

    # Database server password
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-password \
    --value "@someThingComplicated1234"

    # Catalog database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-catalog \
    --value "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    # Identity database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-identity \
    --value "Server=db;Database=Microsoft.eShopOnWeb.Identity;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    Pods authentication using Azure Workload Identity

    In order for our Pods to retrieve secrets from Azure Key Vault, we'll need to set up a way for the Pod to authenticate against Azure AD. This can be achieved by implementing the new Azure Workload Identity feature of AKS.

    info

    At the time of this writing, the workload identity feature of AKS is in Preview.

    The workload identity feature within AKS allows us to leverage native Kubernetes resources and link a Kubernetes ServiceAccount to an Azure Managed Identity to authenticate against Azure AD.

For the authentication flow, our Kubernetes cluster will act as an OpenID Connect (OIDC) issuer and will be able to issue identity tokens to ServiceAccounts, which will be assigned to our Pods.

    The Azure Managed Identity will be granted permission to access secrets in our Azure Key Vault and with the ServiceAccount being assigned to our Pods, they will be able to retrieve our secrets.

    For more information on how the authentication mechanism all works, check out this doc.

    To implement all this, start by enabling the new preview feature for AKS.

    az feature register \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"
    caution

    This can take several minutes to complete.

Check the status and ensure the state shows Registered before moving forward.

    az feature show \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"

    Update your AKS cluster to enable the workload identity feature and enable the OIDC issuer endpoint.

    az aks update \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --enable-workload-identity \
    --enable-oidc-issuer

    Create an Azure Managed Identity and retrieve its client ID.

    MANAGED_IDENTITY_CLIENT_ID=$(az identity create \
    --name aks-workload-identity \
    --resource-group $RG_NAME \
    --subscription $(az account show --query id -o tsv) \
    --query 'clientId' -o tsv)

    Create the Kubernetes ServiceAccount.

    # Set namespace (this must align with the namespace that your app is deployed into)
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account name
    SERVICE_ACCOUNT_NAME=eshop-serviceaccount

    # Create the service account
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
  labels:
    azure.workload.identity/use: "true"
  name: ${SERVICE_ACCOUNT_NAME}
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
EOF
    info

Note: to enable this ServiceAccount to work with Azure Workload Identity, you must annotate the resource with azure.workload.identity/client-id and add a label of azure.workload.identity/use: "true"

    That was a lot... Let's review what we just did.

    We have an Azure Managed Identity (object in Azure AD), an OIDC issuer URL (endpoint in our Kubernetes cluster), and a Kubernetes ServiceAccount.

    The next step is to "tie" these components together and establish a Federated Identity Credential so that Azure AD can trust authentication requests from your Kubernetes cluster.

    info

This identity federation can be established between Azure AD and any Kubernetes cluster; not just AKS 🤗

    To establish the federated credential, we'll need the OIDC issuer URL, and a subject which points to your Kubernetes ServiceAccount.

    # Get the OIDC issuer URL
    OIDC_ISSUER_URL=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)

    # Set the subject name using this format: `system:serviceaccount:<YOUR_SERVICE_ACCOUNT_NAMESPACE>:<YOUR_SERVICE_ACCOUNT_NAME>`
    SUBJECT=system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME

    az identity federated-credential create \
    --name aks-federated-credential \
    --identity-name aks-workload-identity \
    --resource-group $RG_NAME \
    --issuer $OIDC_ISSUER_URL \
    --subject $SUBJECT

With the authentication components set, we can now create a SecretProviderClass, which includes details about the Azure Key Vault, the secrets to pull from the vault, and the identity used to access the vault.

    # Get the tenant id for the key vault
    TENANT_ID=$(az keyvault show \
    --name $AKV_NAME \
    --resource-group $RG_NAME \
    --query properties.tenantId -o tsv)

    # Create the secret provider for azure key vault
kubectl apply -f - <<EOF
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: eshop-azure-keyvault
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "${MANAGED_IDENTITY_CLIENT_ID}"
    keyvaultName: "${AKV_NAME}"
    cloudName: ""
    objects: |
      array:
        - |
          objectName: mssql-password
          objectType: secret
          objectVersion: ""
        - |
          objectName: mssql-connection-catalog
          objectType: secret
          objectVersion: ""
        - |
          objectName: mssql-connection-identity
          objectType: secret
          objectVersion: ""
    tenantId: "${TENANT_ID}"
  secretObjects:
  - secretName: eshop-secrets
    type: Opaque
    data:
    - objectName: mssql-password
      key: mssql-password
    - objectName: mssql-connection-catalog
      key: mssql-connection-catalog
    - objectName: mssql-connection-identity
      key: mssql-connection-identity
EOF

Finally, let's grant the Azure Managed Identity permission to retrieve secrets from the Azure Key Vault.

    az keyvault set-policy \
    --name $AKV_NAME \
    --secret-permissions get \
    --spn $MANAGED_IDENTITY_CLIENT_ID

    Re-package deployments

    Update your database deployment to load environment variables from our ConfigMap, attach the PVC and SecretProviderClass as volumes, mount the volumes into the Pod, and use the ServiceAccount to retrieve secrets.

    Additionally, you may notice the database Pod is set to use fsGroup:10001 as part of the securityContext. This is required as the MSSQL container runs using a non-root account called mssql and this account has the proper permissions to read/write data at the /var/opt/mssql mount path.

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: db
      labels:
        app: db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          securityContext:
            fsGroup: 10001
          serviceAccountName: ${SERVICE_ACCOUNT_NAME}
          containers:
            - name: db
              image: mcr.microsoft.com/mssql/server:2019-latest
              ports:
                - containerPort: 1433
              envFrom:
                - configMapRef:
                    name: mssql-settings
              env:
                - name: MSSQL_SA_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: eshop-secrets
                      key: mssql-password
              resources: {}
              volumeMounts:
                - name: mssqldb
                  mountPath: /var/opt/mssql
                - name: eshop-secrets
                  mountPath: "/mnt/secrets-store"
                  readOnly: true
          volumes:
            - name: mssqldb
              persistentVolumeClaim:
                claimName: mssql-data
            - name: eshop-secrets
              csi:
                driver: secrets-store.csi.k8s.io
                readOnly: true
                volumeAttributes:
                  secretProviderClass: eshop-azure-keyvault
    EOF

    We'll update the API and Web deployments in a similar way.

    # Set the image tag
    IMAGE_TAG=<YOUR_IMAGE_TAG>

    # API deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
      labels:
        app: api
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          serviceAccount: ${SERVICE_ACCOUNT_NAME}
          containers:
            - name: api
              image: ${ACR_NAME}.azurecr.io/api:${IMAGE_TAG}
              ports:
                - containerPort: 80
              envFrom:
                - configMapRef:
                    name: aspnet-settings
              env:
                - name: ConnectionStrings__CatalogConnection
                  valueFrom:
                    secretKeyRef:
                      name: eshop-secrets
                      key: mssql-connection-catalog
                - name: ConnectionStrings__IdentityConnection
                  valueFrom:
                    secretKeyRef:
                      name: eshop-secrets
                      key: mssql-connection-identity
              resources: {}
              volumeMounts:
                - name: aspnet
                  mountPath: /root/.aspnet/https
                  readOnly: true
                - name: eshop-secrets
                  mountPath: "/mnt/secrets-store"
                  readOnly: true
          volumes:
            - name: aspnet
              persistentVolumeClaim:
                claimName: aspnet-data
            - name: eshop-secrets
              csi:
                driver: secrets-store.csi.k8s.io
                readOnly: true
                volumeAttributes:
                  secretProviderClass: eshop-azure-keyvault
    EOF

    ## Web deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
      labels:
        app: web
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          serviceAccount: ${SERVICE_ACCOUNT_NAME}
          containers:
            - name: web
              image: ${ACR_NAME}.azurecr.io/web:${IMAGE_TAG}
              ports:
                - containerPort: 80
              envFrom:
                - configMapRef:
                    name: aspnet-settings
              env:
                - name: ConnectionStrings__CatalogConnection
                  valueFrom:
                    secretKeyRef:
                      name: eshop-secrets
                      key: mssql-connection-catalog
                - name: ConnectionStrings__IdentityConnection
                  valueFrom:
                    secretKeyRef:
                      name: eshop-secrets
                      key: mssql-connection-identity
              resources: {}
              volumeMounts:
                - name: aspnet
                  mountPath: /root/.aspnet/https
                  readOnly: true
                - name: eshop-secrets
                  mountPath: "/mnt/secrets-store"
                  readOnly: true
          volumes:
            - name: aspnet
              persistentVolumeClaim:
                claimName: aspnet-data
            - name: eshop-secrets
              csi:
                driver: secrets-store.csi.k8s.io
                readOnly: true
                volumeAttributes:
                  secretProviderClass: eshop-azure-keyvault
    EOF

    If all went well with your deployment updates, you should be able to browse to your website and buy some merchandise again 🥳

    echo "http://$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Conclusion

    Although there are no visible changes to our website, we've made a ton of changes on the Kubernetes backend to make this application much more secure and resilient.

    We used a combination of Kubernetes resources and AKS-specific features to achieve our goal of securing our secrets and ensuring data is not lost on container crashes and restarts.

    To learn more about the components we leveraged here today, check out the resources and additional tutorials listed below.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/kubernetes/page/4/index.html b/cnny-2023/tags/kubernetes/page/4/index.html index f479ffb267..fb1b5d1ded 100644 --- a/cnny-2023/tags/kubernetes/page/4/index.html +++ b/cnny-2023/tags/kubernetes/page/4/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "kubernetes"

    View All Tags

    · 10 min read
    Paul Yu

    Welcome to Day 3 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we added configuration, secrets, and storage to our app. Today we'll explore how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Generate TLS certificate and store in Azure Key Vault
    • Implement custom DNS using Azure DNS
    • Enable Web Application Routing add-on for AKS
    • Implement Ingress for the web application
    • Conclusion
    • Resources

    Gather requirements

    Currently, our eShopOnWeb app has three Kubernetes services deployed:

    1. db exposed internally via ClusterIP
    2. api exposed externally via LoadBalancer
    3. web exposed externally via LoadBalancer

    As mentioned in my post last week, Services allow applications to communicate with each other using DNS names. Kubernetes has built-in service discovery capabilities that allow Pods to resolve Services simply by using their names.

    In the case of our api and web deployments, they can simply reach the database by calling its name. The service type of ClusterIP for the db can remain as-is since it only needs to be accessed by the api and web apps.

    On the other hand, api and web both need to be accessed over the public internet. Currently, these services are using service type LoadBalancer which tells AKS to provision an Azure Load Balancer with a public IP address. No one is going to remember the IP addresses, so we need to make the app more accessible by adding a custom domain name and securing it with a TLS certificate.

    Here's what we're going to need:

    • Custom domain name for our app
    • TLS certificate for the custom domain name
    • Routing rule to ensure requests with /api/ in the URL are routed to the backend REST API
    • Routing rule to ensure requests without /api/ in the URL are routed to the web UI

    Just like last week, we will use the Web Application Routing add-on for AKS. But this time, we'll integrate it with Azure DNS and Azure Key Vault to satisfy all of our requirements above.

    info

    At the time of this writing the add-on is still in Public Preview

    Generate TLS certificate and store in Azure Key Vault

    We deployed an Azure Key Vault yesterday to store secrets. We'll use it again to store a TLS certificate too.

    Let's create and export a self-signed certificate for the custom domain.

    DNS_NAME=eshoponweb$RANDOM.com
    openssl req -new -x509 -nodes -out web-tls.crt -keyout web-tls.key -subj "/CN=${DNS_NAME}" -addext "subjectAltName=DNS:${DNS_NAME}"
    openssl pkcs12 -export -in web-tls.crt -inkey web-tls.key -out web-tls.pfx -password pass:
    info

    For learning purposes we'll use a self-signed certificate and a fake custom domain name.

    To browse to the site using the fake domain, we'll mimic a DNS lookup by adding an entry to your host file which maps the public IP address assigned to the ingress controller to the custom domain.

    In a production scenario, you will need to have a real domain delegated to Azure DNS and a valid TLS certificate for the domain.

    Grab your Azure Key Vault name and set the value in a variable for later use.

    RESOURCE_GROUP=cnny-week3

    AKV_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.KeyVault/vaults \
    --query "[0].name" -o tsv)

    Grant yourself permissions to get, list, and import certificates.

    MY_USER_NAME=$(az account show --query user.name -o tsv)
    MY_USER_OBJECT_ID=$(az ad user show --id $MY_USER_NAME --query id -o tsv)

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MY_USER_OBJECT_ID \
    --certificate-permissions get list import

    Upload the TLS certificate to Azure Key Vault and grab its certificate URI.

    WEB_TLS_CERT_ID=$(az keyvault certificate import \
    --vault-name $AKV_NAME \
    --name web-tls \
    --file web-tls.pfx \
    --query id \
    --output tsv)

    Implement custom DNS with Azure DNS

    Create a custom domain for our application and grab its Azure resource id.

    DNS_ZONE_ID=$(az network dns zone create \
    --name $DNS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query id \
    --output tsv)

    Enable Web Application Routing add-on for AKS

    As we enable the Web Application Routing add-on, we'll also pass in the Azure DNS Zone resource id which triggers the installation of the external-dns controller in your Kubernetes cluster. This controller will be able to write Azure DNS zone entries on your behalf as you deploy Ingress manifests.

    AKS_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerService/managedClusters \
    --query "[0].name" -o tsv)

    az aks enable-addons \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --addons web_application_routing \
    --dns-zone-resource-id=$DNS_ZONE_ID \
    --enable-secret-rotation

    The add-on will also deploy a new Azure Managed Identity which is used by the external-dns controller when writing Azure DNS zone entries. Currently, it does not have permission to do that, so let's grant it permission.

    # This is where resources are automatically deployed by AKS
    NODE_RESOURCE_GROUP=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query nodeResourceGroup -o tsv)

    # This is the managed identity created by the Web Application Routing add-on
    MANAGED_IDENTITY_OBJECT_ID=$(az resource show \
    --name webapprouting-${AKS_NAME} \
    --resource-group $NODE_RESOURCE_GROUP \
    --resource-type Microsoft.ManagedIdentity/userAssignedIdentities \
    --query properties.principalId \
    --output tsv)

    # Grant the managed identity permissions to write DNS entries
    az role assignment create \
    --role "DNS Zone Contributor" \
    --assignee $MANAGED_IDENTITY_OBJECT_ID \
    --scope $DNS_ZONE_ID

    The Azure Managed Identity will also be used to retrieve and rotate TLS certificates from Azure Key Vault. So we'll need to grant it permission for that too.

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MANAGED_IDENTITY_OBJECT_ID \
    --secret-permissions get \
    --certificate-permissions get

    Implement Ingress for the web application

    Before we create a new Ingress manifest, let's update the existing services to use ClusterIP instead of LoadBalancer. With an Ingress in place, there is no reason why we need the Service resources to be accessible from outside the cluster. The new Ingress will be the only entrypoint for external users.

    We can use the kubectl patch command to update the services:

    kubectl patch service api -p '{"spec": {"type": "ClusterIP"}}'
    kubectl patch service web -p '{"spec": {"type": "ClusterIP"}}'
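
    To confirm both Services switched over, you can optionally list them and check that the TYPE column now shows ClusterIP:

    kubectl get service api web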

    Deploy a new Ingress to place in front of the web Service. Notice there is a special annotations entry for kubernetes.azure.com/tls-cert-keyvault-uri which points back to our self-signed certificate that was uploaded to Azure Key Vault.

    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.azure.com/tls-cert-keyvault-uri: ${WEB_TLS_CERT_ID}
      name: web
    spec:
      ingressClassName: webapprouting.kubernetes.azure.com
      rules:
        - host: ${DNS_NAME}
          http:
            paths:
              - backend:
                  service:
                    name: web
                    port:
                      number: 80
                path: /
                pathType: Prefix
              - backend:
                  service:
                    name: api
                    port:
                      number: 80
                path: /api
                pathType: Prefix
      tls:
        - hosts:
            - ${DNS_NAME}
          secretName: web-tls
    EOF

    In our manifest above, we've also configured the Ingress to route traffic to either the web or api service based on the URL path requested. If the request URL includes /api/ then it will send traffic to the api backend service. Otherwise, it will send traffic to the web service.

    Within a few minutes, the external-dns controller will add an A record to Azure DNS which points to the Ingress resource's public IP. With the custom domain in place, we can simply browse using this domain name.
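
    If you're curious, you can optionally peek at the zone to confirm the record was created; the command below assumes the $RESOURCE_GROUP and $DNS_NAME variables are still set from earlier:

    # List A records in the Azure DNS zone
    az network dns record-set a list \
    --resource-group $RESOURCE_GROUP \
    --zone-name $DNS_NAME \
    --output table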

    info

    As mentioned above, since this is not a real domain name, we need to modify our host file to make it seem like our custom domain is resolving to the Ingress' public IP address.

    To get the ingress public IP, run the following:

    # Get the IP
    kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

    # Get the hostname
    kubectl get ingress web -o jsonpath="{.spec.tls[0].hosts[0]}"

    Next, open your host file and add an entry using the format <YOUR_PUBLIC_IP> <YOUR_CUSTOM_DOMAIN>. Below is an example of what it should look like.

    20.237.116.224 eshoponweb11265.com

    See this doc for more info on how to do this.
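
    If you just want a quick smoke test without editing your hosts file, curl can fake the DNS resolution for a single request using its --resolve option; this is only a convenience check and reuses the ${DNS_NAME} variable plus the ingress IP from the command above:

    # Grab the ingress public IP
    INGRESS_IP=$(kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}")

    # -k is needed because we're using a self-signed certificate
    curl -k --resolve ${DNS_NAME}:443:${INGRESS_IP} https://${DNS_NAME}/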

    When browsing to the website, you may be presented with a warning about the connection not being private. This is due to the fact that we are using a self-signed certificate. This is expected, so go ahead and proceed anyway to load up the page.

    Why is the Admin page broken?

    If you log in using the admin@microsoft.com account and browse to the Admin page, you'll notice no products are loaded on the page.

    This is because the admin page is built using Blazor and compiled as a WebAssembly application that runs in your browser. When the application was compiled, it packed the appsettings.Development.json file as an embedded resource. This file contains the base URL for the public API and it currently points to https://localhost:5099. Now that we have a domain name, we can update the base URL and point it to our custom domain.

    From the root of the eShopOnWeb repo, update the configuration file using a sed command.

    sed -i -e "s/localhost:5099/${DNS_NAME}/g" ./src/BlazorAdmin/wwwroot/appsettings.Development.json

    Rebuild and push the container to Azure Container Registry.

    # Grab the name of your Azure Container Registry
    ACR_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerRegistry/registries \
    --query "[0].name" -o tsv)

    # Invoke a build and publish job
    az acr build \
    --registry $ACR_NAME \
    --image $ACR_NAME.azurecr.io/web:v0.1.0 \
    --file ./src/Web/Dockerfile .

    Once the container build has completed, we can issue a kubectl patch command to quickly update the web deployment to test our change.

    kubectl patch deployment web -p "$(cat <<EOF
    {
    "spec": {
    "template": {
    "spec": {
    "containers": [
    {
    "name": "web",
    "image": "${ACR_NAME}.azurecr.io/web:v0.1.0"
    }
    ]
    }
    }
    }
    }
    EOF
    )"

    If all went well, you will be able to browse the admin page again and confirm product data is being loaded 🥳

    Conclusion

    The Web Application Routing add-on for AKS aims to streamline the process of exposing your application to the public using the open-source NGINX Ingress Controller. With the add-on being managed by Azure, it natively integrates with other Azure services like Azure DNS and eliminates the need to manually create DNS entries. It can also integrate with Azure Key Vault to automatically pull in TLS certificates and rotate them as needed to further reduce operational overhead.

    We are one step closer to production and in the upcoming posts we'll further operationalize and secure our deployment, so stay tuned!

    In the meantime, check out the resources listed below for further reading.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/kubernetes/page/5/index.html b/cnny-2023/tags/kubernetes/page/5/index.html index 2b2056cfe4..23465148a3 100644 --- a/cnny-2023/tags/kubernetes/page/5/index.html +++ b/cnny-2023/tags/kubernetes/page/5/index.html @@ -14,13 +14,13 @@ - +

    5 posts tagged with "kubernetes"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 5 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about debugging and instrumenting our application. Today we'll explore the topic of container image signing and secure supply chain.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Introduction
    • Prerequisites
    • Create a digital signing certificate
    • Generate an Azure Container Registry Token
    • Set up Notation
    • Install the Notation Azure Key Vault Plugin
    • Add the signing Certificate to Notation
    • Sign Container Images
    • Conclusion

    Introduction

    The secure supply chain is a crucial aspect of software development, delivery, and deployment, and digital signing plays a critical role in this process.

    By using digital signatures to verify the authenticity and integrity of container images, organizations can improve the security of their software supply chain and reduce the risk of security breaches and data compromise.

    In this article, you'll learn how to use Notary, an open-source project hosted by the Cloud Native Computing Foundation (CNCF) to digitally sign container images stored on Azure Container Registry.

    Prerequisites

    To follow along, you'll need an instance of:

    Create a digital signing certificate

    A digital signing certificate is a certificate that is used to digitally sign and verify the authenticity and integrity of digital artifacts such as documents, software, and of course container images.

    Before you can implement digital signatures, you must first create a digital signing certificate.

    Run the following command to generate the certificate:

    1. Create the policy file

      cat <<EOF > ./my_policy.json
      {
      "issuerParameters": {
      "certificateTransparency": null,
      "name": "Self"
      },
      "x509CertificateProperties": {
      "ekus": [
      "1.3.6.1.5.5.7.3.3"
      ],
      "key_usage": [
      "digitalSignature"
      ],
      "subject": "CN=${keySubjectName}",
      "validityInMonths": 12
      }
      }
      EOF

      The ekus and key usage of this certificate policy dictate that the certificate can only be used for digital signatures.

    2. Create the certificate in Azure Key Vault

      az keyvault certificate create --name $keyName --vault-name $keyVaultName --policy @my_policy.json

      Replace $keyName and $keyVaultName with your desired certificate name and Azure Key Vault instance name.

    Generate an Azure Container Registry token

    Azure Container Registry tokens are used to grant access to the contents of the registry. Tokens can be used for a variety of things such as pulling images, pushing images, or managing the registry.

    As part of the container image signing workflow, you'll need a token to authenticate the Notation CLI with your Azure Container Registry.

    Run the following command to generate an ACR token:

    az acr token create \
    --name $tokenName \
    --registry $registry \
    --scope-map _repositories_admin \
    --query 'credentials.passwords[0].value' \
    --only-show-errors \
    --output tsv

    Replace $tokenName with your name for the ACR token and $registry with the name of your ACR instance.

    Set up Notation

    Notation is the command-line interface for the CNCF Notary project. You'll use it to digitally sign the api and web container images for the eShopOnWeb application.

    Run the following commands to download and install the NotationCli:

    1. Open a terminal or command prompt window

    2. Download the Notary notation release

      curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      If you're not using Linux, you can find the releases here.

    3. Extract the contents of the notation.tar.gz

      tar xvzf notation.tar.gz > /dev/null 2>&1
    4. Copy the notation binary to the $HOME/bin directory

      cp ./notation $HOME/bin
    5. Add the $HOME/bin directory to the PATH environment variable

      export PATH="$HOME/bin:$PATH"
    6. Remove the downloaded files

      rm notation.tar.gz LICENSE
    7. Check the notation version

      notation --version

    Install the Notation Azure Key Vault plugin

    By design the NotationCli supports plugins that extend its digital signing capabilities to remote registries. And in order to sign your container images stored in Azure Container Registry, you'll need to install the Azure Key Vault plugin for Notation.

    Run the following commands to install the azure-kv plugin:

    1. Download the plugin

      curl -Lo notation-azure-kv.tar.gz \
      https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      Non-Linux releases can be found here.

    2. Extract to the plugin directory & delete download files

      tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv > /dev/null 2>&1

      rm -rf notation-azure-kv.tar.gz
    3. Verify the plugin was installed

      notation plugin ls

    Add the signing certificate to Notation

    Now that you have Notation and the Azure Key Vault plugin installed, add the certificate's keyId created above to Notation.

    1. Get the Certificate Key ID from Azure Key Vault

      az keyvault certificate show \
      --vault-name $keyVaultName \
      --name $keyName \
      --query "kid" --only-show-errors --output tsv

      Replace $keyVaultName and $keyName with the appropriate information.

    2. Add the Key ID to KMS using Notation

      notation key add --plugin azure-kv --id $keyID $keyName
    3. Check the key list

      notation key ls

    Sign Container Images

    At this point, all that's left is to sign the container images.

    Run the notation sign command to sign the api and web container images:

    notation sign $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword

    notation sign $registry.azurecr.io/api:$tag \
    --username $tokenName \
    --password $tokenPassword

    Replace $registry, $tag, $tokenName, and $tokenPassword with the appropriate values. To improve security, use a SHA hash for the tag.
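
    If you'd like to sign by digest rather than a mutable tag, one way to look up an image's digest is with the Azure CLI; the example below uses the web image and assumes $registry and $tag are still set:

    # Look up the digest for the web image
    az acr repository show \
    --name $registry \
    --image web:$tag \
    --query digest \
    --output tsv

    You can then pass the image reference as $registry.azurecr.io/web@<digest> to notation sign.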

    NOTE: If you didn't take note of the token password, you can rerun the az acr token create command to generate a new password.

    Conclusion

    Digital signing plays a critical role in ensuring the security of software supply chains.

    By signing software components, organizations can verify the authenticity and integrity of software, helping to prevent unauthorized modifications, tampering, and malware.

    And if you want to take digital signing to a whole new level by using them to prevent the deployment of unsigned container images, check out the Ratify project on GitHub!

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/microservices/index.html b/cnny-2023/tags/microservices/index.html index dca426f783..4b5344550f 100644 --- a/cnny-2023/tags/microservices/index.html +++ b/cnny-2023/tags/microservices/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "microservices"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 4 of Week 1 of #CloudNativeNewYear!

    This week we'll focus on advanced topics and best practices for Cloud-Native practitioners, kicking off with this post on Serverless Container Options with Azure. We'll look at technologies, tools and best practices that range from managed services like Azure Kubernetes Service, to options allowing finer granularity of control and oversight.

    What We'll Cover

    • What is Microservice Architecture?
    • How do you design a Microservice?
    • What challenges do Microservices introduce?
    • Conclusion
    • Resources


    Microservices are a modern way of designing and building software that increases deployment velocity by decomposing an application into small autonomous services that can be deployed independently.

    By deploying loosely coupled microservices your applications can be developed, deployed, and scaled independently. Because each service is independent, it can be updated or replaced without having to worry about the impact on the rest of the application. This means that if a bug is found in one service, it can be fixed without having to redeploy the entire application. All of which gives an organization the ability to deliver value to their customers faster.

    In this article, we will explore the basics of microservices architecture, its benefits and challenges, and how it can help improve the development, deployment, and maintenance of software applications.

    What is Microservice Architecture?

    Before explaining what Microservice architecture is, it’s important to understand what problems microservices aim to address.

    Traditional software development is centered around building monolithic applications. Monolithic applications are built as a single, large codebase. This means your code is tightly coupled, causing the monolithic app to suffer from the following:

    Too much Complexity: Monolithic applications can become complex and difficult to understand and maintain as they grow. This can make it hard to identify and fix bugs and add new features.

    Difficult to Scale: Monolithic applications can be difficult to scale as they often have a single point of failure, which can cause the whole application to crash if one component fails.

    Slow Deployment: Deploying a monolithic application can be risky and time-consuming, as a small change in one part of the codebase can affect the entire application.

    Microservice architecture (often called microservices) is an architecture style that addresses the challenges created by Monolithic applications. Microservices architecture is a way of designing and building software applications as a collection of small, independent services that communicate with each other through APIs. This allows for faster development and deployment cycles, as well as easier scaling and maintenance than is possible with a monolithic application.

    How do you design a Microservice?

    Building applications with Microservices architecture requires a different approach. Microservices architecture focuses on business capabilities rather than technical layers, such as data access or messaging. Doing so requires that you shift your focus away from the technical stack and model your applications based upon the various domains that exist within the business.

    Domain-driven design (DDD) is a way to design software by focusing on the business needs. You can use Domain-driven design as a framework that guides the development of well-designed microservices by building services that encapsulate knowledge in each domain and abstract that knowledge from clients.

    In Domain-driven design you start by modeling the business domain and creating a domain model. A domain model is an abstract model of the business that distills and organizes a domain of knowledge and provides a common language for developers and domain experts. It’s the resulting domain model that microservices are best suited to be built around, because it helps establish a well-defined boundary between external systems and other internal applications.

    In short, before you begin designing microservices, start by mapping the functions of the business and their connections to create a domain model for the microservice(s) to be built around.

    What challenges do Microservices introduce?

    Microservices solve a lot of problems and have several advantages, but the grass isn’t always greener on the other side.

    One of the key challenges of microservices is managing communication between services. Because services are independent, they need to communicate with each other through APIs. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear API design, with well-defined inputs and outputs for each service. It is also important to have a system for managing and monitoring communication between services, to ensure that everything is running smoothly.

    Another challenge of microservices is managing the deployment and scaling of services. Because each service is independent, it needs to be deployed and scaled separately from the rest of the application. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear and consistent deployment process, with well-defined steps for deploying and scaling each service. Furthermore, it is advisable to host them on a system with self-healing capabilities to reduce operational burden.

    It is also important to have a system for monitoring and managing the deployment and scaling of services, to ensure optimal performance.

    Each of these challenges has created fertile ground for the tooling and processes that exist in the cloud-native ecosystem. Kubernetes, CI/CD, and other DevOps practices are part of the package of adopting the microservices architecture.

    Conclusion

    In summary, microservices architecture focuses on software applications as a collection of small, independent services that communicate with each other over well-defined APIs.

    The main advantages of microservices include:

    • increased flexibility and scalability per microservice,
    • efficient resource utilization (with help from a container orchestrator like Kubernetes),
    • and faster development cycles.

    Continue following along with this series to see how you can use Kubernetes to help adopt microservices patterns in your own environments!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/tags/nginx-ingress-controller/index.html b/cnny-2023/tags/nginx-ingress-controller/index.html index 014c4946f5..a4365888dd 100644 --- a/cnny-2023/tags/nginx-ingress-controller/index.html +++ b/cnny-2023/tags/nginx-ingress-controller/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "nginx-ingress-controller"

    View All Tags

    · 10 min read
    Paul Yu

    Welcome to Day 3 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we added configuration, secrets, and storage to our app. Today we'll explore how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Generate TLS certificate and store in Azure Key Vault
    • Implement custom DNS using Azure DNS
    • Enable Web Application Routing add-on for AKS
    • Implement Ingress for the web application
    • Conclusion
    • Resources

    Gather requirements

    Currently, our eShopOnWeb app has three Kubernetes services deployed:

    1. db exposed internally via ClusterIP
    2. api exposed externally via LoadBalancer
    3. web exposed externally via LoadBalancer

    As mentioned in my post last week, Services allow applications to communicate with each other using DNS names. Kubernetes has built-in service discovery capabilities that allow Pods to resolve Services simply by using their names.

    In the case of our api and web deployments, they can simply reach the database by calling its name. The service type of ClusterIP for the db can remain as-is since it only needs to be accessed by the api and web apps.

    On the other hand, api and web both need to be accessed over the public internet. Currently, these services are using service type LoadBalancer which tells AKS to provision an Azure Load Balancer with a public IP address. No one is going to remember the IP addresses, so we need to make the app more accessible by adding a custom domain name and securing it with a TLS certificate.

    Here's what we're going to need:

    • Custom domain name for our app
    • TLS certificate for the custom domain name
    • Routing rule to ensure requests with /api/ in the URL are routed to the backend REST API
    • Routing rule to ensure requests without /api/ in the URL are routed to the web UI

    Just like last week, we will use the Web Application Routing add-on for AKS. But this time, we'll integrate it with Azure DNS and Azure Key Vault to satisfy all of our requirements above.

    info

    At the time of this writing the add-on is still in Public Preview

    Generate TLS certificate and store in Azure Key Vault

    We deployed an Azure Key Vault yesterday to store secrets. We'll use it again to store a TLS certificate too.

    Let's create and export a self-signed certificate for the custom domain.

    DNS_NAME=eshoponweb$RANDOM.com
    openssl req -new -x509 -nodes -out web-tls.crt -keyout web-tls.key -subj "/CN=${DNS_NAME}" -addext "subjectAltName=DNS:${DNS_NAME}"
    openssl pkcs12 -export -in web-tls.crt -inkey web-tls.key -out web-tls.pfx -password pass:
    info

    For learning purposes we'll use a self-signed certificate and a fake custom domain name.

    To browse to the site using the fake domain, we'll mimic a DNS lookup by adding an entry to your host file which maps the public IP address assigned to the ingress controller to the custom domain.

    In a production scenario, you will need to have a real domain delegated to Azure DNS and a valid TLS certificate for the domain.

    Grab your Azure Key Vault name and set the value in a variable for later use.

    RESOURCE_GROUP=cnny-week3

    AKV_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.KeyVault/vaults \
    --query "[0].name" -o tsv)

    Grant yourself permissions to get, list, and import certificates.

    MY_USER_NAME=$(az account show --query user.name -o tsv)
    MY_USER_OBJECT_ID=$(az ad user show --id $MY_USER_NAME --query id -o tsv)

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MY_USER_OBJECT_ID \
    --certificate-permissions get list import

    Upload the TLS certificate to Azure Key Vault and grab its certificate URI.

    WEB_TLS_CERT_ID=$(az keyvault certificate import \
    --vault-name $AKV_NAME \
    --name web-tls \
    --file web-tls.pfx \
    --query id \
    --output tsv)

    Implement custom DNS with Azure DNS

    Create a custom domain for our application and grab its Azure resource id.

    DNS_ZONE_ID=$(az network dns zone create \
    --name $DNS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query id \
    --output tsv)

    Enable Web Application Routing add-on for AKS

    As we enable the Web Application Routing add-on, we'll also pass in the Azure DNS Zone resource id which triggers the installation of the external-dns controller in your Kubernetes cluster. This controller will be able to write Azure DNS zone entries on your behalf as you deploy Ingress manifests.

    AKS_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerService/managedClusters \
    --query "[0].name" -o tsv)

    az aks enable-addons \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --addons web_application_routing \
    --dns-zone-resource-id=$DNS_ZONE_ID \
    --enable-secret-rotation

    The add-on will also deploy a new Azure Managed Identity which is used by the external-dns controller when writing Azure DNS zone entries. Currently, it does not have permission to do that, so let's grant it permission.

    # This is where resources are automatically deployed by AKS
    NODE_RESOURCE_GROUP=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RESOURCE_GROUP \
    --query nodeResourceGroup -o tsv)

    # This is the managed identity created by the Web Application Routing add-on
    MANAGED_IDENTITY_OBJECT_ID=$(az resource show \
    --name webapprouting-${AKS_NAME} \
    --resource-group $NODE_RESOURCE_GROUP \
    --resource-type Microsoft.ManagedIdentity/userAssignedIdentities \
    --query properties.principalId \
    --output tsv)

    # Grant the managed identity permissions to write DNS entries
    az role assignment create \
    --role "DNS Zone Contributor" \
    --assignee $MANAGED_IDENTITY_OBJECT_ID \
    --scope $DNS_ZONE_ID

    The Azure Managed Identity will also be used to retrieve and rotate TLS certificates from Azure Key Vault. So we'll need to grant it permission for that too.

    az keyvault set-policy \
    --name $AKV_NAME \
    --object-id $MANAGED_IDENTITY_OBJECT_ID \
    --secret-permissions get \
    --certificate-permissions get

    Implement Ingress for the web application

    Before we create a new Ingress manifest, let's update the existing services to use ClusterIP instead of LoadBalancer. With an Ingress in place, there is no reason why we need the Service resources to be accessible from outside the cluster. The new Ingress will be the only entrypoint for external users.

    We can use the kubectl patch command to update the services:

    kubectl patch service api -p '{"spec": {"type": "ClusterIP"}}'
    kubectl patch service web -p '{"spec": {"type": "ClusterIP"}}'

    Deploy a new Ingress to place in front of the web Service. Notice there is a special annotations entry for kubernetes.azure.com/tls-cert-keyvault-uri which points back to our self-signed certificate that was uploaded to Azure Key Vault.

    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.azure.com/tls-cert-keyvault-uri: ${WEB_TLS_CERT_ID}
      name: web
    spec:
      ingressClassName: webapprouting.kubernetes.azure.com
      rules:
        - host: ${DNS_NAME}
          http:
            paths:
              - backend:
                  service:
                    name: web
                    port:
                      number: 80
                path: /
                pathType: Prefix
              - backend:
                  service:
                    name: api
                    port:
                      number: 80
                path: /api
                pathType: Prefix
      tls:
        - hosts:
            - ${DNS_NAME}
          secretName: web-tls
    EOF

    In our manifest above, we've also configured the Ingress to route traffic to either the web or api service based on the URL path requested. If the request URL includes /api/ then it will send traffic to the api backend service. Otherwise, it will send traffic to the web service.

    Within a few minutes, the external-dns controller will add an A record to Azure DNS which points to the Ingress resource's public IP. With the custom domain in place, we can simply browse using this domain name.

    info

    As mentioned above, since this is not a real domain name, we need to modify our host file to make it seem like our custom domain is resolving to the Ingress' public IP address.

    To get the ingress public IP, run the following:

    # Get the IP
    kubectl get ingress web -o jsonpath="{.status.loadBalancer.ingress[0].ip}"

    # Get the hostname
    kubectl get ingress web -o jsonpath="{.spec.tls[0].hosts[0]}"

    Next, open your host file and add an entry using the format <YOUR_PUBLIC_IP> <YOUR_CUSTOM_DOMAIN>. Below is an example of what it should look like.

    20.237.116.224 eshoponweb11265.com

    See this doc for more info on how to do this.

    When browsing to the website, you may be presented with a warning about the connection not being private. This is due to the fact that we are using a self-signed certificate. This is expected, so go ahead and proceed anyway to load up the page.

    Why is the Admin page broken?

    If you log in using the admin@microsoft.com account and browse to the Admin page, you'll notice no products are loaded on the page.

    This is because the admin page is built using Blazor and compiled as a WebAssembly application that runs in your browser. When the application was compiled, it packed the appsettings.Development.json file as an embedded resource. This file contains the base URL for the public API and it currently points to https://localhost:5099. Now that we have a domain name, we can update the base URL and point it to our custom domain.

    From the root of the eShopOnWeb repo, update the configuration file using a sed command.

    sed -i -e "s/localhost:5099/${DNS_NAME}/g" ./src/BlazorAdmin/wwwroot/appsettings.Development.json

    Rebuild and push the container to Azure Container Registry.

    # Grab the name of your Azure Container Registry
    ACR_NAME=$(az resource list \
    --resource-group $RESOURCE_GROUP \
    --resource-type Microsoft.ContainerRegistry/registries \
    --query "[0].name" -o tsv)

    # Invoke a build and publish job
    az acr build \
    --registry $ACR_NAME \
    --image $ACR_NAME.azurecr.io/web:v0.1.0 \
    --file ./src/Web/Dockerfile .

    Once the container build has completed, we can issue a kubectl patch command to quickly update the web deployment to test our change.

    kubectl patch deployment web -p "$(cat <<EOF
    {
    "spec": {
    "template": {
    "spec": {
    "containers": [
    {
    "name": "web",
    "image": "${ACR_NAME}.azurecr.io/web:v0.1.0"
    }
    ]
    }
    }
    }
    }
    EOF
    )"

    If all went well, you will be able to browse the admin page again and confirm product data is being loaded 🥳

    Conclusion

    The Web Application Routing add-on for AKS aims to streamline the process of exposing your application to the public using the open-source NGINX Ingress Controller. With the add-on being managed by Azure, it natively integrates with other Azure services like Azure DNS and eliminates the need to manually create DNS entries. It can also integrate with Azure Key Vault to automatically pull in TLS certificates and rotate them as needed to further reduce operational overhead.

    We are one step closer to production and in the upcoming posts we'll further operationalize and secure our deployment, so stay tuned!

    In the meantime, check out the resources listed below for further reading.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/notary/index.html b/cnny-2023/tags/notary/index.html index dda03ff553..d9c2ab8cf1 100644 --- a/cnny-2023/tags/notary/index.html +++ b/cnny-2023/tags/notary/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "notary"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 5 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about debugging and instrumenting our application. Today we'll explore the topic of container image signing and secure supply chain.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Introduction
    • Prerequisites
    • Create a digital signing certificate
    • Generate an Azure Container Registry Token
    • Set up Notation
    • Install the Notation Azure Key Vault Plugin
    • Add the signing Certificate to Notation
    • Sign Container Images
    • Conclusion

    Introduction

    The secure supply chain is a crucial aspect of software development, delivery, and deployment, and digital signing plays a critical role in this process.

    By using digital signatures to verify the authenticity and integrity of container images, organizations can improve the security of their software supply chain and reduce the risk of security breaches and data compromise.

    In this article, you'll learn how to use Notary, an open-source project hosted by the Cloud Native Computing Foundation (CNCF) to digitally sign container images stored on Azure Container Registry.

    Prerequisites

    To follow along, you'll need an instance of:

    Create a digital signing certificate

    A digital signing certificate is a certificate that is used to digitally sign and verify the authenticity and integrity of digital artifacts such as documents, software, and of course container images.

    Before you can implement digital signatures, you must first create a digital signing certificate.

    Run the following command to generate the certificate:

    1. Create the policy file

      cat <<EOF > ./my_policy.json
      {
      "issuerParameters": {
      "certificateTransparency": null,
      "name": "Self"
      },
      "x509CertificateProperties": {
      "ekus": [
      "1.3.6.1.5.5.7.3.3"
      ],
      "key_usage": [
      "digitalSignature"
      ],
      "subject": "CN=${keySubjectName}",
      "validityInMonths": 12
      }
      }
      EOF

      The ekus and key usage of this certificate policy dictate that the certificate can only be used for digital signatures.

    2. Create the certificate in Azure Key Vault

      az keyvault certificate create --name $keyName --vault-name $keyVaultName --policy @my_policy.json

      Replace $keyName and $keyVaultName with your desired certificate name and Azure Key Vault instance name.

    Generate an Azure Container Registry token

    Azure Container Registry tokens are used to grant access to the contents of the registry. Tokens can be used for a variety of things such as pulling images, pushing images, or managing the registry.

    As part of the container image signing workflow, you'll need a token to authenticate the Notation CLI with your Azure Container Registry.

    Run the following command to generate an ACR token:

    az acr token create \
    --name $tokenName \
    --registry $registry \
    --scope-map _repositories_admin \
    --query 'credentials.passwords[0].value' \
    --only-show-errors \
    --output tsv

    Replace $tokenName with your name for the ACR token and $registry with the name of your ACR instance.

    Set up Notation

    Notation is the command-line interface for the CNCF Notary project. You'll use it to digitally sign the api and web container images for the eShopOnWeb application.

    Run the following commands to download and install the NotationCli:

    1. Open a terminal or command prompt window

    2. Download the Notary notation release

      curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      If you're not using Linux, you can find the releases here.

    3. Extract the contents of the notation.tar.gz

      tar xvzf notation.tar.gz > /dev/null 2>&1
    4. Copy the notation binary to the $HOME/bin directory

      cp ./notation $HOME/bin
    5. Add the $HOME/bin directory to the PATH environment variable

      export PATH="$HOME/bin:$PATH"
    6. Remove the downloaded files

      rm notation.tar.gz LICENSE
    7. Check the notation version

      notation --version

    Install the Notation Azure Key Vault plugin

    By design the NotationCli supports plugins that extend its digital signing capabilities to remote registries. And in order to sign your container images stored in Azure Container Registry, you'll need to install the Azure Key Vault plugin for Notation.

    Run the following commands to install the azure-kv plugin:

    1. Download the plugin

      curl -Lo notation-azure-kv.tar.gz \
      https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      Non-Linux releases can be found here.

    2. Extract to the plugin directory & delete download files

      tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv > /dev/null 2>&1

      rm -rf notation-azure-kv.tar.gz
    3. Verify the plugin was installed

      notation plugin ls

    Add the signing certificate to Notation

    Now that you have Notation and the Azure Key Vault plugin installed, add the certificate's keyId created above to Notation.

    1. Get the Certificate Key ID from Azure Key Vault

      az keyvault certificate show \
      --vault-name $keyVaultName \
      --name $keyName \
      --query "kid" --only-show-errors --output tsv

      Replace $keyVaultName and $keyName with the appropriate information.

    2. Add the Key ID to KMS using Notation

      notation key add --plugin azure-kv --id $keyID $keyName
    3. Check the key list

      notation key ls

    Sign Container Images

    At this point, all that's left is to sign the container images.

    Run the notation sign command to sign the api and web container images:

    notation sign $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword

    notation sign $registry.azurecr.io/api:$tag \
    --username $tokenName \
    --password $tokenPassword

    Replace $registry, $tag, $tokenName, and $tokenPassword with the appropriate values. To improve security, use a SHA hash for the tag.

    NOTE: If you didn't take note of the token password, you can rerun the az acr token create command to generate a new password.

    Conclusion

    Digital signing plays a critical role in ensuring the security of software supply chains.

    By signing software components, organizations can verify the authenticity and integrity of software, helping to prevent unauthorized modifications, tampering, and malware.

    And if you want to take digital signing to a whole new level by using them to prevent the deployment of unsigned container images, check out the Ratify project on GitHub!

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/notation/index.html b/cnny-2023/tags/notation/index.html index 236cac0693..66360b3d0f 100644 --- a/cnny-2023/tags/notation/index.html +++ b/cnny-2023/tags/notation/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "notation"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 5 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about debugging and instrumenting our application. Today we'll explore the topic of container image signing and secure supply chain.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Introduction
    • Prerequisites
    • Create a digital signing certificate
    • Generate an Azure Container Registry Token
    • Set up Notation
    • Install the Notation Azure Key Vault Plugin
    • Add the signing Certificate to Notation
    • Sign Container Images
    • Conclusion

    Introduction

    The secure supply chain is a crucial aspect of software development, delivery, and deployment, and digital signing plays a critical role in this process.

    By using digital signatures to verify the authenticity and integrity of container images, organizations can improve the security of their software supply chain and reduce the risk of security breaches and data compromise.

    In this article, you'll learn how to use Notary, an open-source project hosted by the Cloud Native Computing Foundation (CNCF) to digitally sign container images stored on Azure Container Registry.

    Prerequisites

    To follow along, you'll need an instance of:

    Create a digital signing certificate

    A digital signing certificate is a certificate that is used to digitally sign and verify the authenticity and integrity of digital artifacts such as documents, software, and of course container images.

    Before you can implement digital signatures, you must first create a digital signing certificate.

    Run the following command to generate the certificate:

    1. Create the policy file

      cat <<EOF > ./my_policy.json
      {
      "issuerParameters": {
      "certificateTransparency": null,
      "name": "Self"
      },
      "x509CertificateProperties": {
      "ekus": [
      "1.3.6.1.5.5.7.3.3"
      ],
      "key_usage": [
      "digitalSignature"
      ],
      "subject": "CN=${keySubjectName}",
      "validityInMonths": 12
      }
      }
      EOF

      The ekus and key usage of this certificate policy dictate that the certificate can only be used for digital signatures.

    2. Create the certificate in Azure Key Vault

      az keyvault certificate create --name $keyName --vault-name $keyVaultName --policy @my_policy.json

      Replace $keyName and $keyVaultName with your desired certificate name and Azure Key Vault instance name.

    Generate an Azure Container Registry token

    Azure Container Registry tokens are used to grant access to the contents of the registry. Tokens can be used for a variety of things such as pulling images, pushing images, or managing the registry.

    As part of the container image signing workflow, you'll need a token to authenticate the Notation CLI with your Azure Container Registry.

    Run the following command to generate an ACR token:

    az acr token create \
    --name $tokenName \
    --registry $registry \
    --scope-map _repositories_admin \
    --query 'credentials.passwords[0].value' \
    --only-show-errors \
    --output tsv

    Replace $tokenName with your name for the ACR token and $registry with the name of your ACR instance.

Set up Notation

    Notation is the command-line interface for the CNCF Notary project. You'll use it to digitally sign the api and web container images for the eShopOnWeb application.

Run the following commands to download and install the Notation CLI:

    1. Open a terminal or command prompt window

Download the Notation release

      curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      If you're not using Linux, you can find the releases here.

    3. Extract the contents of the notation.tar.gz

      tar xvzf notation.tar.gz > /dev/null 2>&1
    4. Copy the notation binary to the $HOME/bin directory

      cp ./notation $HOME/bin
    5. Add the $HOME/bin directory to the PATH environment variable

      export PATH="$HOME/bin:$PATH"
    6. Remove the downloaded files

      rm notation.tar.gz LICENSE
    7. Check the notation version

      notation --version

    Install the Notation Azure Key Vault plugin

By design, the Notation CLI supports plugins that extend its digital signing capabilities. To sign container images stored in Azure Container Registry with a certificate held in Azure Key Vault, you'll need to install the Azure Key Vault plugin for Notation.

    Run the following commands to install the azure-kv plugin:

    1. Download the plugin

      curl -Lo notation-azure-kv.tar.gz \
      https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      Non-Linux releases can be found here.

    2. Extract to the plugin directory & delete download files

  # Ensure the plugin directory exists, then extract the plugin into it
  mkdir -p ~/.config/notation/plugins/azure-kv
  tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv > /dev/null 2>&1

  rm -rf notation-azure-kv.tar.gz
    3. Verify the plugin was installed

      notation plugin ls

    Add the signing certificate to Notation

Now that you have Notation and the Azure Key Vault plugin installed, add the key ID of the certificate you created earlier to Notation.

    1. Get the Certificate Key ID from Azure Key Vault

  keyID=$(az keyvault certificate show \
    --vault-name $keyVaultName \
    --name $keyName \
    --query "kid" --only-show-errors --output tsv)

  Replace $keyVaultName and $keyName with the appropriate information. The command stores the certificate's key ID in the keyID variable used in the next step.

2. Add the Key ID to Notation using the azure-kv plugin

      notation key add --plugin azure-kv --id $keyID $keyName
    3. Check the key list

      notation key ls

    Sign Container Images

    At this point, all that's left is to sign the container images.

    Run the notation sign command to sign the api and web container images:

    notation sign $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword

    notation sign $registry.azurecr.io/api:$tag \
    --username $tokenName \
    --password $tokenPassword

Replace $registry, $tag, $tokenName, and $tokenPassword with the appropriate values. To improve security, consider signing images by digest rather than by tag (see the sketch below).
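
A minimal sketch of signing by digest, assuming the image has already been pushed to your registry (the digest lookup shown here is one option, not part of the original walkthrough):

# Resolve the digest of the web image, then sign the image by digest
digest=$(az acr repository show \
  --name $registry \
  --image web:$tag \
  --query digest -o tsv)

notation sign $registry.azurecr.io/web@$digest \
  --username $tokenName \
  --password $tokenPassword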

    NOTE: If you didn't take note of the token password, you can rerun the az acr token create command to generate a new password.

    Conclusion

    Digital signing plays a critical role in ensuring the security of software supply chains.

    By signing software components, organizations can verify the authenticity and integrity of software, helping to prevent unauthorized modifications, tampering, and malware.

And if you want to take digital signing to a whole new level by using signatures to prevent the deployment of unsigned container images, check out the Ratify project on GitHub!

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    One post tagged with "persistent-storage"


    · 12 min read
    Paul Yu

    Welcome to Day 2 of Week 3 of #CloudNativeNewYear!

The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about getting an existing application running in Kubernetes with a full pipeline in GitHub Actions. Today we'll evaluate our sample application's configuration, storage, and networking requirements and implement them using Kubernetes and Azure resources.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Implement environment variables using ConfigMaps
    • Implement persistent volumes using Azure Files
    • Implement secrets using Azure Key Vault
    • Re-package deployments
    • Conclusion
    • Resources

Caution: Before you begin, make sure you've gone through yesterday's post to set up your AKS cluster.

    Gather requirements

The eShopOnWeb application is written in .NET 7 and has two major pieces of functionality. The web UI is where customers can browse and shop. The web UI also includes an admin portal for managing the product catalog. This admin portal is packaged as a WebAssembly application and relies on a separate REST API service. Both the web UI and the REST API connect to the same SQL Server container.

Looking through the source code, which can be found here, we can identify requirements for configs, persistent storage, and secrets.

    Database server

    • Need to store the password for the sa account as a secure secret
    • Need persistent storage volume for data directory
    • Need to inject environment variables for SQL Server license type and EULA acceptance

    Web UI and REST API service

    • Need to store database connection string as a secure secret
    • Need to inject ASP.NET environment variables to override app settings
    • Need persistent storage volume for ASP.NET key storage

    Implement environment variables using ConfigMaps

ConfigMaps are relatively straightforward to create. If you were following along with the examples last week, this should be review 😉

    Create a ConfigMap to store database environment variables.

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: mssql-settings
data:
  MSSQL_PID: Developer
  ACCEPT_EULA: "Y"
EOF

    Create another ConfigMap to store ASP.NET environment variables.

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aspnet-settings
data:
  ASPNETCORE_ENVIRONMENT: Development
EOF

    Implement persistent volumes using Azure Files

    Similar to last week, we'll take advantage of storage classes built into AKS. For our SQL Server data, we'll use the azurefile-csi-premium storage class and leverage an Azure Files resource as our PersistentVolume.

    Create a PersistentVolumeClaim (PVC) for persisting SQL Server data.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 5Gi
EOF

    Create another PVC for persisting ASP.NET data.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aspnet-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 5Gi
EOF

    Implement secrets using Azure Key Vault

It's a well-known fact that Kubernetes Secrets are not really secrets. They're just base64-encoded values and not secure, especially if malicious users have access to your Kubernetes cluster.
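
To illustrate the point, here's a quick sketch you can run against any cluster (the secret name and value are hypothetical throwaways):

# Create a secret, read it back, and decode it -- the "protection" is only base64
kubectl create secret generic demo-secret --from-literal=password='S3cr3t!'
kubectl get secret demo-secret -o jsonpath='{.data.password}' | base64 -d; echo
kubectl delete secret demo-secret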

    In a production scenario, you will want to leverage an external vault like Azure Key Vault or HashiCorp Vault to encrypt and store secrets.

    With AKS, we can enable the Secrets Store CSI driver add-on which will allow us to leverage Azure Key Vault.

    # Set some variables
    RG_NAME=<YOUR_RESOURCE_GROUP_NAME>
    AKS_NAME=<YOUR_AKS_CLUSTER_NAME>
    ACR_NAME=<YOUR_ACR_NAME>

    az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name $AKS_NAME \
    --resource-group $RG_NAME

    With the add-on enabled, you should see aks-secrets-store-csi-driver and aks-secrets-store-provider-azure resources installed on each node in your Kubernetes cluster.

    Run the command below to verify.

    kubectl get pods \
    --namespace kube-system \
    --selector 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'

    The Secrets Store CSI driver allows us to use secret stores via Container Storage Interface (CSI) volumes. This provider offers capabilities such as mounting and syncing between the secure vault and Kubernetes Secrets. On AKS, the Azure Key Vault Provider for Secrets Store CSI Driver enables integration with Azure Key Vault.

    You may not have an Azure Key Vault created yet, so let's create one and add some secrets to it.

    AKV_NAME=$(az keyvault create \
    --name akv-eshop$RANDOM \
    --resource-group $RG_NAME \
    --query name -o tsv)

    # Database server password
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-password \
    --value "@someThingComplicated1234"

    # Catalog database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-catalog \
    --value "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    # Identity database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-identity \
    --value "Server=db;Database=Microsoft.eShopOnWeb.Identity;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

Pod authentication using Azure Workload Identity

    In order for our Pods to retrieve secrets from Azure Key Vault, we'll need to set up a way for the Pod to authenticate against Azure AD. This can be achieved by implementing the new Azure Workload Identity feature of AKS.

Info: At the time of this writing, the workload identity feature of AKS is in Preview.

    The workload identity feature within AKS allows us to leverage native Kubernetes resources and link a Kubernetes ServiceAccount to an Azure Managed Identity to authenticate against Azure AD.

For the authentication flow, our Kubernetes cluster will act as an OpenID Connect (OIDC) issuer and will be able to issue identity tokens to ServiceAccounts, which will be assigned to our Pods.

The Azure Managed Identity will be granted permission to access secrets in our Azure Key Vault, and with the ServiceAccount assigned to our Pods, they will be able to retrieve our secrets.

    For more information on how the authentication mechanism all works, check out this doc.

    To implement all this, start by enabling the new preview feature for AKS.

    az feature register \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"

Caution: This can take several minutes to complete.

Check the status and ensure the state shows Registered before moving forward.

    az feature show \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"

    Update your AKS cluster to enable the workload identity feature and enable the OIDC issuer endpoint.

    az aks update \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --enable-workload-identity \
    --enable-oidc-issuer

    Create an Azure Managed Identity and retrieve its client ID.

    MANAGED_IDENTITY_CLIENT_ID=$(az identity create \
    --name aks-workload-identity \
    --resource-group $RG_NAME \
    --subscription $(az account show --query id -o tsv) \
    --query 'clientId' -o tsv)

    Create the Kubernetes ServiceAccount.

    # Set namespace (this must align with the namespace that your app is deployed into)
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account name
    SERVICE_ACCOUNT_NAME=eshop-serviceaccount

    # Create the service account
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
  labels:
    azure.workload.identity/use: "true"
  name: ${SERVICE_ACCOUNT_NAME}
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
EOF

Info: To enable this ServiceAccount to work with Azure Workload Identity, you must annotate the resource with azure.workload.identity/client-id and add the label azure.workload.identity/use: "true".

    That was a lot... Let's review what we just did.

    We have an Azure Managed Identity (object in Azure AD), an OIDC issuer URL (endpoint in our Kubernetes cluster), and a Kubernetes ServiceAccount.

    The next step is to "tie" these components together and establish a Federated Identity Credential so that Azure AD can trust authentication requests from your Kubernetes cluster.

Info: This identity federation can be established between Azure AD and any Kubernetes cluster, not just AKS 🤗

    To establish the federated credential, we'll need the OIDC issuer URL, and a subject which points to your Kubernetes ServiceAccount.

    # Get the OIDC issuer URL
    OIDC_ISSUER_URL=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)

    # Set the subject name using this format: `system:serviceaccount:<YOUR_SERVICE_ACCOUNT_NAMESPACE>:<YOUR_SERVICE_ACCOUNT_NAME>`
    SUBJECT=system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME

    az identity federated-credential create \
    --name aks-federated-credential \
    --identity-name aks-workload-identity \
    --resource-group $RG_NAME \
    --issuer $OIDC_ISSUER_URL \
    --subject $SUBJECT
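
As an optional sanity check (not part of the original walkthrough), you can confirm the cluster's OIDC issuer is serving a discovery document before moving on:

# The AKS issuer URL typically ends with a trailing slash, so the well-known path can be appended directly
curl -s "${OIDC_ISSUER_URL}.well-known/openid-configuration"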

With the authentication components set, we can now create a SecretProviderClass which includes details about the Azure Key Vault, the secrets to pull out of the vault, and the identity used to access the vault.

    # Get the tenant id for the key vault
    TENANT_ID=$(az keyvault show \
    --name $AKV_NAME \
    --resource-group $RG_NAME \
    --query properties.tenantId -o tsv)

    # Create the secret provider for azure key vault
kubectl apply -f - <<EOF
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: eshop-azure-keyvault
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "${MANAGED_IDENTITY_CLIENT_ID}"
    keyvaultName: "${AKV_NAME}"
    cloudName: ""
    objects: |
      array:
        - |
          objectName: mssql-password
          objectType: secret
          objectVersion: ""
        - |
          objectName: mssql-connection-catalog
          objectType: secret
          objectVersion: ""
        - |
          objectName: mssql-connection-identity
          objectType: secret
          objectVersion: ""
    tenantId: "${TENANT_ID}"
  secretObjects:
    - secretName: eshop-secrets
      type: Opaque
      data:
        - objectName: mssql-password
          key: mssql-password
        - objectName: mssql-connection-catalog
          key: mssql-connection-catalog
        - objectName: mssql-connection-identity
          key: mssql-connection-identity
EOF

Finally, let's grant the Azure Managed Identity permission to retrieve secrets from the Azure Key Vault.

    az keyvault set-policy \
    --name $AKV_NAME \
    --secret-permissions get \
    --spn $MANAGED_IDENTITY_CLIENT_ID

    Re-package deployments

    Update your database deployment to load environment variables from our ConfigMap, attach the PVC and SecretProviderClass as volumes, mount the volumes into the Pod, and use the ServiceAccount to retrieve secrets.

Additionally, you may notice the database Pod is set to use fsGroup: 10001 as part of the securityContext. This is required because the MSSQL container runs as a non-root account called mssql, and setting the fsGroup ensures this account has the proper permissions to read/write data at the /var/opt/mssql mount path.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      securityContext:
        fsGroup: 10001
      serviceAccountName: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: db
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          envFrom:
            - configMapRef:
                name: mssql-settings
          env:
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-password
          resources: {}
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
EOF
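
Before moving on, you can optionally confirm the database Pod starts and the Key Vault secrets are mounted; a quick sketch using the resource names from the manifest above:

# Wait for the db deployment to become ready, then list the mounted secret files
kubectl rollout status deployment/db
kubectl exec deployment/db -- ls /mnt/secrets-store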

    We'll update the API and Web deployments in a similar way.

    # Set the image tag
    IMAGE_TAG=<YOUR_IMAGE_TAG>

    # API deployment
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      serviceAccountName: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: api
          image: ${ACR_NAME}.azurecr.io/api:${IMAGE_TAG}
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: aspnet-settings
          env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
          resources: {}
          volumeMounts:
            - name: aspnet
              mountPath: /root/.aspnet/https   # container path for ASP.NET key storage (backed by the aspnet-data PVC)
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: aspnet
          persistentVolumeClaim:
            claimName: aspnet-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
EOF

# Web deployment
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      serviceAccountName: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: web
          image: ${ACR_NAME}.azurecr.io/web:${IMAGE_TAG}
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: aspnet-settings
          env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
          resources: {}
          volumeMounts:
            - name: aspnet
              mountPath: /root/.aspnet/https   # container path for ASP.NET key storage (backed by the aspnet-data PVC)
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: aspnet
          persistentVolumeClaim:
            claimName: aspnet-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
EOF

    If all went well with your deployment updates, you should be able to browse to your website and buy some merchandise again 🥳

    echo "http://$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Conclusion

Although there are no visible changes to our website, we've made a ton of changes on the Kubernetes backend to make this application much more secure and resilient.

    We used a combination of Kubernetes resources and AKS-specific features to achieve our goal of securing our secrets and ensuring data is not lost on container crashes and restarts.

To learn more about the components we leveraged here today, check out the resources and additional tutorials listed below.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    One post tagged with "persistent-volume-claims"


    · 8 min read
    Paul Yu

    Welcome to Day 4 of Week 2 of #CloudNativeNewYear!

The theme for this week is Kubernetes fundamentals. Yesterday we talked about how to set app configurations and secrets at runtime using Kubernetes ConfigMaps and Secrets. Today we'll explore the topic of persistent storage on Kubernetes and show how you can leverage Persistent Volumes and Persistent Volume Claims to ensure your PostgreSQL data can survive container restarts.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Containers are ephemeral
    • Persistent storage on Kubernetes
    • Persistent storage on AKS
    • Takeaways
    • Resources

    Containers are ephemeral

In our sample application, the frontend UI writes vote values to a backend PostgreSQL database. By default, the database container stores its data on the container's local file system, so there will be data loss when the pod is re-deployed or crashes, as containers are meant to start with a clean slate each time.

    Let's re-deploy our sample app and experience the problem first hand.

📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and set up your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests

    Wait for the azure-voting-app service to be assigned a public IP then browse to the website and submit some votes. Use the command below to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Now, let's delete the pods and watch Kubernetes do what it does best... that is, re-schedule pods.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl delete --all pod --wait=false && kubectl get po -w

    Once the pods have been recovered, reload the website and confirm the vote tally has been reset to zero.

    We need to fix this so that the data outlives the container.

    Persistent storage on Kubernetes

    In order for application data to survive crashes and restarts, you must implement Persistent Volumes and Persistent Volume Claims.

A persistent volume represents storage that is available to the cluster. Storage volumes can be provisioned manually by an administrator or dynamically using the Container Storage Interface (CSI) and storage classes, which include information on how to provision CSI volumes.

    When a user needs to add persistent storage to their application, a persistent volume claim is made to allocate chunks of storage from the volume. This "claim" includes things like volume mode (e.g., file system or block storage), the amount of storage to allocate, the access mode, and optionally a storage class. Once a persistent volume claim has been deployed, users can add the volume to the pod and mount it in a container.

    In the next section, we'll demonstrate how to enable persistent storage on AKS.

    Persistent storage on AKS

    With AKS, CSI drivers and storage classes are pre-deployed into your cluster. This allows you to natively use Azure Disks, Azure Files, and Azure Blob Storage as persistent volumes. You can either bring your own Azure storage account and use it with AKS or have AKS provision an Azure storage account for you.

    To view the Storage CSI drivers that have been enabled in your AKS cluster, run the following command.

    az aks show \
    --name <YOUR_AKS_NAME> \
    --resource-group <YOUR_AKS_RESOURCE_GROUP> \
    --query storageProfile

    You should see output that looks like this.

{
  "blobCsiDriver": null,
  "diskCsiDriver": {
    "enabled": true,
    "version": "v1"
  },
  "fileCsiDriver": {
    "enabled": true
  },
  "snapshotController": {
    "enabled": true
  }
}

    To view the storage classes that have been installed in your cluster, run the following command.

    kubectl get storageclass

    Workload requirements will dictate which CSI driver and storage class you will need to use.

If you need object (blob) storage, then you should use the blobCsiDriver. The driver may not be enabled by default, but you can enable it by following the instructions found in the Resources section below.

If you need block or file storage, you should leverage either the diskCsiDriver or the fileCsiDriver. The decision between these two boils down to whether or not the underlying storage needs to be accessible by one pod or by multiple pods. It is important to note that diskCsiDriver currently supports access from a single pod only. Therefore, if you need data to be accessible by multiple pods at the same time, you should opt for the fileCsiDriver.

For our PostgreSQL deployment, we'll use the diskCsiDriver and have AKS create an Azure Disk resource for us. There is no need to create a PV resource; all we need to do is create a PVC using the managed-csi-premium storage class.

    Run the following command to create the PVC.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: managed-csi-premium
EOF

    When you check the PVC resource, you'll notice the STATUS is set to Pending. It will be set to Bound once the volume is mounted in the PostgreSQL container.

    kubectl get persistentvolumeclaim

    Let's delete the azure-voting-db deployment.

    kubectl delete deploy azure-voting-db

    Next, we need to apply an updated deployment manifest which includes our PVC.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-db
  name: azure-voting-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-voting-db
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: azure-voting-db
    spec:
      containers:
        - image: postgres:15.0-alpine
          name: postgres
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: azure-voting-secret
                  key: POSTGRES_PASSWORD
          resources: {}
          volumeMounts:
            - name: mypvc
              mountPath: "/var/lib/postgresql/data"
              subPath: "data"
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: pvc-azuredisk
EOF

    In the manifest above, you'll see that we are mounting a new volume called mypvc (the name can be whatever you want) in the pod which points to a PVC named pvc-azuredisk. With the volume in place, we can mount it in the container by referencing the name of the volume mypvc and setting the mount path to /var/lib/postgresql/data (which is the default path).

    💡 IMPORTANT: When mounting a volume into a non-empty subdirectory, you must add subPath to the volume mount and point it to a subdirectory in the volume rather than mounting at root. In our case, when Azure Disk is formatted, it leaves a lost+found directory as documented here.

Watch the pods and wait for the STATUS to show Running and the pod's READY status to show 1/1.

    # wait for the pod to come up then ctrl+c to stop watching
    kubectl get po -w

    Verify that the STATUS of the PVC is now set to Bound

    kubectl get persistentvolumeclaim

    With the new database container running, let's restart the application pod, wait for the pod's READY status to show 1/1, then head back over to our web browser and submit a few votes.

    kubectl delete pod -lapp=azure-voting-app --wait=false && kubectl get po -lapp=azure-voting-app -w

Now the moment of truth... let's rip out the pods again, wait for the pods to be re-scheduled, and confirm our vote counts remain intact.

    kubectl delete --all pod --wait=false && kubectl get po -w

If you navigate back to the website, you'll find the votes are still there 🎉

    Takeaways

By design, containers are meant to be ephemeral, and stateless workloads are ideal on Kubernetes. However, there will come a time when your data needs to outlive the container. To persist data in your Kubernetes workloads, you need to leverage PVs, PVCs, and optionally storage classes. In our demo scenario, we leveraged CSI drivers built into AKS and created a PVC using pre-installed storage classes. From there, we updated the database deployment to mount the PVC in the container, and AKS did the rest of the work in provisioning the underlying Azure Disk. If the built-in storage classes do not fit your needs (for example, you need to change the ReclaimPolicy or the SKU for the Azure resource), then you can create your own custom storage class and configure it just the way you need it 😊
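
For illustration, a custom storage class might look something like this minimal sketch (the name and parameter choices here are assumptions, not from the original post; adjust the SKU and policies to your needs):

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-retain        # hypothetical name
provisioner: disk.csi.azure.com       # Azure Disk CSI driver
parameters:
  skuName: Premium_LRS                # Azure Disk SKU
reclaimPolicy: Retain                 # keep the underlying Azure Disk when the PVC is deleted
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
EOF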

    We'll revisit this topic again next week but in the meantime, check out some of the resources listed below to learn more.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/secrets-management/index.html b/cnny-2023/tags/secrets-management/index.html index 5430516579..b2cbc70243 100644 --- a/cnny-2023/tags/secrets-management/index.html +++ b/cnny-2023/tags/secrets-management/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "secrets-management"

    View All Tags

    · 12 min read
    Paul Yu

    Welcome to Day 2 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about getting an existing application running in Kubernetes with a full pipeline in GitHub Actions. Today we'll evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes and Azure resources.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Implement environment variables using ConfigMaps
    • Implement persistent volumes using Azure Files
    • Implement secrets using Azure Key Vault
    • Re-package deployments
    • Conclusion
    • Resources
    caution

    Before you begin, make sure you've gone through yesterday's post to set up your AKS cluster.

    Gather requirements

    The eShopOnWeb application is written in .NET 7 and has two major pieces of functionality. The web UI is where customers can browse and shop. The web UI also includes an admin portal for managing the product catalog. This admin portal, is packaged as a WebAssembly application and relies on a separate REST API service. Both the web UI and the REST API connect to the same SQL Server container.

    Looking through the source code which can be found here we can identify requirements for configs, persistent storage, and secrets.

    Database server

    • Need to store the password for the sa account as a secure secret
    • Need persistent storage volume for data directory
    • Need to inject environment variables for SQL Server license type and EULA acceptance

    Web UI and REST API service

    • Need to store database connection string as a secure secret
    • Need to inject ASP.NET environment variables to override app settings
    • Need persistent storage volume for ASP.NET key storage

    Implement environment variables using ConfigMaps

    ConfigMaps are relatively straight-forward to create. If you were following along with the examples last week, this should be review 😉

    Create a ConfigMap to store database environment variables.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
    name: mssql-settings
    data:
    MSSQL_PID: Developer
    ACCEPT_EULA: "Y"
    EOF

    Create another ConfigMap to store ASP.NET environment variables.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
    name: aspnet-settings
    data:
    ASPNETCORE_ENVIRONMENT: Development
    EOF

    Implement persistent volumes using Azure Files

    Similar to last week, we'll take advantage of storage classes built into AKS. For our SQL Server data, we'll use the azurefile-csi-premium storage class and leverage an Azure Files resource as our PersistentVolume.

    Create a PersistentVolumeClaim (PVC) for persisting SQL Server data.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
    name: mssql-data
    spec:
    accessModes:
    - ReadWriteMany
    storageClassName: azurefile-csi-premium
    resources:
    requests:
    storage: 5Gi
    EOF

    Create another PVC for persisting ASP.NET data.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
    name: aspnet-data
    spec:
    accessModes:
    - ReadWriteMany
    storageClassName: azurefile-csi-premium
    resources:
    requests:
    storage: 5Gi
    EOF

    Implement secrets using Azure Key Vault

    It's a well known fact that Kubernetes secretes are not really secrets. They're just base64-encoded values and not secure, especially if malicious users have access to your Kubernetes cluster.

    In a production scenario, you will want to leverage an external vault like Azure Key Vault or HashiCorp Vault to encrypt and store secrets.

    With AKS, we can enable the Secrets Store CSI driver add-on which will allow us to leverage Azure Key Vault.

    # Set some variables
    RG_NAME=<YOUR_RESOURCE_GROUP_NAME>
    AKS_NAME=<YOUR_AKS_CLUSTER_NAME>
    ACR_NAME=<YOUR_ACR_NAME>

    az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name $AKS_NAME \
    --resource-group $RG_NAME

    With the add-on enabled, you should see aks-secrets-store-csi-driver and aks-secrets-store-provider-azure resources installed on each node in your Kubernetes cluster.

    Run the command below to verify.

    kubectl get pods \
    --namespace kube-system \
    --selector 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'

    The Secrets Store CSI driver allows us to use secret stores via Container Storage Interface (CSI) volumes. This provider offers capabilities such as mounting and syncing between the secure vault and Kubernetes Secrets. On AKS, the Azure Key Vault Provider for Secrets Store CSI Driver enables integration with Azure Key Vault.

    You may not have an Azure Key Vault created yet, so let's create one and add some secrets to it.

    AKV_NAME=$(az keyvault create \
    --name akv-eshop$RANDOM \
    --resource-group $RG_NAME \
    --query name -o tsv)

    # Database server password
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-password \
    --value "@someThingComplicated1234"

    # Catalog database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-catalog \
    --value "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    # Identity database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-identity \
    --value "Server=db;Database=Microsoft.eShopOnWeb.Identity;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    Pods authentication using Azure Workload Identity

    In order for our Pods to retrieve secrets from Azure Key Vault, we'll need to set up a way for the Pod to authenticate against Azure AD. This can be achieved by implementing the new Azure Workload Identity feature of AKS.

    info

    At the time of this writing, the workload identity feature of AKS is in Preview.

    The workload identity feature within AKS allows us to leverage native Kubernetes resources and link a Kubernetes ServiceAccount to an Azure Managed Identity to authenticate against Azure AD.

    For the authentication flow, our Kubernetes cluster will act as an Open ID Connect (OIDC) issuer and will be able issue identity tokens to ServiceAccounts which will be assigned to our Pods.

    The Azure Managed Identity will be granted permission to access secrets in our Azure Key Vault and with the ServiceAccount being assigned to our Pods, they will be able to retrieve our secrets.

    For more information on how the authentication mechanism all works, check out this doc.

    To implement all this, start by enabling the new preview feature for AKS.

    az feature register \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"
    caution

    This can take several minutes to complete.

    Check the status and ensure the state shows Regestered before moving forward.

    az feature show \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"

    Update your AKS cluster to enable the workload identity feature and enable the OIDC issuer endpoint.

    az aks update \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --enable-workload-identity \
    --enable-oidc-issuer

    Create an Azure Managed Identity and retrieve its client ID.

    MANAGED_IDENTITY_CLIENT_ID=$(az identity create \
    --name aks-workload-identity \
    --resource-group $RG_NAME \
    --subscription $(az account show --query id -o tsv) \
    --query 'clientId' -o tsv)

    Create the Kubernetes ServiceAccount.

    # Set namespace (this must align with the namespace that your app is deployed into)
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account name
    SERVICE_ACCOUNT_NAME=eshop-serviceaccount

    # Create the service account
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
    annotations:
    azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
    labels:
    azure.workload.identity/use: "true"
    name: ${SERVICE_ACCOUNT_NAME}
    namespace: ${SERVICE_ACCOUNT_NAMESPACE}
    EOF
    info

    Note to enable this ServiceAccount to work with Azure Workload Identity, you must annotate the resource with azure.workload.identity/client-id, and add a label of azure.workload.identity/use: "true"

    That was a lot... Let's review what we just did.

    We have an Azure Managed Identity (object in Azure AD), an OIDC issuer URL (endpoint in our Kubernetes cluster), and a Kubernetes ServiceAccount.

    The next step is to "tie" these components together and establish a Federated Identity Credential so that Azure AD can trust authentication requests from your Kubernetes cluster.

    info

    This identity federation can be established between Azure AD any Kubernetes cluster; not just AKS 🤗

    To establish the federated credential, we'll need the OIDC issuer URL, and a subject which points to your Kubernetes ServiceAccount.

    # Get the OIDC issuer URL
    OIDC_ISSUER_URL=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)

    # Set the subject name using this format: `system:serviceaccount:<YOUR_SERVICE_ACCOUNT_NAMESPACE>:<YOUR_SERVICE_ACCOUNT_NAME>`
    SUBJECT=system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME

    az identity federated-credential create \
    --name aks-federated-credential \
    --identity-name aks-workload-identity \
    --resource-group $RG_NAME \
    --issuer $OIDC_ISSUER_URL \
    --subject $SUBJECT

    With the authentication components set, we can now create a SecretProviderClass which includes details about the Azure Key Vault, the secrets to pull out from the vault, and identity used to access the vault.

    # Get the tenant id for the key vault
    TENANT_ID=$(az keyvault show \
    --name $AKV_NAME \
    --resource-group $RG_NAME \
    --query properties.tenantId -o tsv)

    # Create the secret provider for azure key vault
    kubectl apply -f - <<EOF
    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
    name: eshop-azure-keyvault
    spec:
    provider: azure
    parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "${MANAGED_IDENTITY_CLIENT_ID}"
    keyvaultName: "${AKV_NAME}"
    cloudName: ""
    objects: |
    array:
    - |
    objectName: mssql-password
    objectType: secret
    objectVersion: ""
    - |
    objectName: mssql-connection-catalog
    objectType: secret
    objectVersion: ""
    - |
    objectName: mssql-connection-identity
    objectType: secret
    objectVersion: ""
    tenantId: "${TENANT_ID}"
    secretObjects:
    - secretName: eshop-secrets
    type: Opaque
    data:
    - objectName: mssql-password
    key: mssql-password
    - objectName: mssql-connection-catalog
    key: mssql-connection-catalog
    - objectName: mssql-connection-identity
    key: mssql-connection-identity
    EOF

    Finally, lets grant the Azure Managed Identity permissions to retrieve secrets from the Azure Key Vault.

    az keyvault set-policy \
    --name $AKV_NAME \
    --secret-permissions get \
    --spn $MANAGED_IDENTITY_CLIENT_ID

    Re-package deployments

    Update your database deployment to load environment variables from our ConfigMap, attach the PVC and SecretProviderClass as volumes, mount the volumes into the Pod, and use the ServiceAccount to retrieve secrets.

    Additionally, you may notice the database Pod is set to use fsGroup:10001 as part of the securityContext. This is required as the MSSQL container runs using a non-root account called mssql and this account has the proper permissions to read/write data at the /var/opt/mssql mount path.

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
    name: db
    labels:
    app: db
    spec:
    replicas: 1
    selector:
    matchLabels:
    app: db
    template:
    metadata:
    labels:
    app: db
    spec:
    securityContext:
    fsGroup: 10001
    serviceAccountName: ${SERVICE_ACCOUNT_NAME}
    containers:
    - name: db
    image: mcr.microsoft.com/mssql/server:2019-latest
    ports:
    - containerPort: 1433
    envFrom:
    - configMapRef:
    name: mssql-settings
    env:
    - name: MSSQL_SA_PASSWORD
    valueFrom:
    secretKeyRef:
    name: eshop-secrets
    key: mssql-password
    resources: {}
    volumeMounts:
    - name: mssqldb
    mountPath: /var/opt/mssql
    - name: eshop-secrets
    mountPath: "/mnt/secrets-store"
    readOnly: true
    volumes:
    - name: mssqldb
    persistentVolumeClaim:
    claimName: mssql-data
    - name: eshop-secrets
    csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
    secretProviderClass: eshop-azure-keyvault
    EOF

    We'll update the API and Web deployments in a similar way.

    # Set the image tag
    IMAGE_TAG=<YOUR_IMAGE_TAG>

    # API deployment
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
    name: api
    labels:
    app: api
    spec:
    replicas: 1
    selector:
    matchLabels:
    app: api
    template:
    metadata:
    labels:
    app: api
    spec:
    serviceAccount: ${SERVICE_ACCOUNT_NAME}
    containers:
    - name: api
    image: ${ACR_NAME}.azurecr.io/api:${IMAGE_TAG}
    ports:
    - containerPort: 80
    envFrom:
    - configMapRef:
    name: aspnet-settings
    env:
    - name: ConnectionStrings__CatalogConnection
    valueFrom:
    secretKeyRef:
    name: eshop-secrets
    key: mssql-connection-catalog
    - name: ConnectionStrings__IdentityConnection
    valueFrom:
    secretKeyRef:
    name: eshop-secrets
    key: mssql-connection-identity
    resources: {}
    volumeMounts:
    - name: aspnet
    mountPath: ~/.aspnet/https:/root/.aspnet/https:ro
    - name: eshop-secrets
    mountPath: "/mnt/secrets-store"
    readOnly: true
    volumes:
    - name: aspnet
    persistentVolumeClaim:
    claimName: aspnet-data
    - name: eshop-secrets
    csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
    secretProviderClass: eshop-azure-keyvault
    EOF

    ## Web deployment
    kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      serviceAccount: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: web
          image: ${ACR_NAME}.azurecr.io/web:${IMAGE_TAG}
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: aspnet-settings
          env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
          resources: {}
          volumeMounts:
            - name: aspnet
              mountPath: ~/.aspnet/https:/root/.aspnet/https:ro
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: aspnet
          persistentVolumeClaim:
            claimName: aspnet-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
    EOF

    If all went well with your deployment updates, you should be able to browse to your website and buy some merchandise again 🥳

    echo "http://$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Conclusion

Although there are no visible changes to our website, we've made a ton of changes on the Kubernetes backend to make this application much more secure and resilient.

    We used a combination of Kubernetes resources and AKS-specific features to achieve our goal of securing our secrets and ensuring data is not lost on container crashes and restarts.

To learn more about the components we leveraged here today, check out the resources and additional tutorials listed below.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/secure-supply-chain/index.html b/cnny-2023/tags/secure-supply-chain/index.html index 018a8f4056..457242dc55 100644 --- a/cnny-2023/tags/secure-supply-chain/index.html +++ b/cnny-2023/tags/secure-supply-chain/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "secure-supply-chain"

    View All Tags

    · 6 min read
    Josh Duffney

    Welcome to Day 5 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about debugging and instrumenting our application. Today we'll explore the topic of container image signing and secure supply chain.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Introduction
    • Prerequisites
    • Create a digital signing certificate
    • Generate an Azure Container Registry Token
    • Set up Notation
    • Install the Notation Azure Key Vault Plugin
    • Add the signing Certificate to Notation
    • Sign Container Images
    • Conclusion

    Introduction

    The secure supply chain is a crucial aspect of software development, delivery, and deployment, and digital signing plays a critical role in this process.

By using digital signatures to verify the authenticity and integrity of container images, organizations can improve the security of their software supply chain and reduce the risk of security breaches and data compromise.

In this article, you'll learn how to use Notary, an open-source project hosted by the Cloud Native Computing Foundation (CNCF), to digitally sign container images stored in Azure Container Registry.

    Prerequisites

    To follow along, you'll need an instance of:

    Create a digital signing certificate

A digital signing certificate is a certificate that is used to digitally sign and verify the authenticity and integrity of digital artifacts such as documents, software, and of course container images.

    Before you can implement digital signatures, you must first create a digital signing certificate.

Run the following commands to generate the certificate:

    1. Create the policy file

cat <<EOF > ./my_policy.json
{
  "issuerParameters": {
    "certificateTransparency": null,
    "name": "Self"
  },
  "x509CertificateProperties": {
    "ekus": [
      "1.3.6.1.5.5.7.3.3"
    ],
    "key_usage": [
      "digitalSignature"
    ],
    "subject": "CN=${keySubjectName}",
    "validityInMonths": 12
  }
}
EOF

      The ekus and key usage of this certificate policy dictate that the certificate can only be used for digital signatures.

    2. Create the certificate in Azure Key Vault

      az keyvault certificate create --name $keyName --vault-name $keyVaultName --policy @my_policy.json

      Replace $keyName and $keyVaultName with your desired certificate name and Azure Key Vault instance name.

Generate an Azure Container Registry token

    Azure Container Registry tokens are used to grant access to the contents of the registry. Tokens can be used for a variety of things such as pulling images, pushing images, or managing the registry.

    As part of the container image signing workflow, you'll need a token to authenticate the Notation CLI with your Azure Container Registry.

    Run the following command to generate an ACR token:

    az acr token create \
    --name $tokenName \
    --registry $registry \
    --scope-map _repositories_admin \
    --query 'credentials.passwords[0].value' \
    --only-show-errors \
    --output tsv

    Replace $tokenName with your name for the ACR token and $registry with the name of your ACR instance.
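The notation sign commands later in this post reference a $tokenPassword variable, so it can be handy to capture the generated password when you create the token. A minimal sketch, simply wrapping the same command in command substitution:

# Capture the token password so it can be reused with notation sign
tokenPassword=$(az acr token create \
  --name $tokenName \
  --registry $registry \
  --scope-map _repositories_admin \
  --query 'credentials.passwords[0].value' \
  --only-show-errors \
  --output tsv)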

Set up Notation

    Notation is the command-line interface for the CNCF Notary project. You'll use it to digitally sign the api and web container images for the eShopOnWeb application.

Run the following commands to download and install the Notation CLI:

    1. Open a terminal or command prompt window

    2. Download the Notary notation release

      curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0-rc.1/notation_1.0.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      If you're not using Linux, you can find the releases here.

    3. Extract the contents of the notation.tar.gz

      tar xvzf notation.tar.gz > /dev/null 2>&1
    4. Copy the notation binary to the $HOME/bin directory

      cp ./notation $HOME/bin
    5. Add the $HOME/bin directory to the PATH environment variable

      export PATH="$HOME/bin:$PATH"
    6. Remove the downloaded files

      rm notation.tar.gz LICENSE
    7. Check the notation version

      notation --version

    Install the Notation Azure Key Vault plugin

By design, the Notation CLI supports plugins that extend its digital signing capabilities to remote registries. In order to sign your container images stored in Azure Container Registry, you'll need to install the Azure Key Vault plugin for Notation.

    Run the following commands to install the azure-kv plugin:

    1. Download the plugin

      curl -Lo notation-azure-kv.tar.gz \
      https://github.com/Azure/notation-azure-kv/releases/download/v0.5.0-rc.1/notation-azure-kv_0.5.0-rc.1_linux_amd64.tar.gz > /dev/null 2>&1

      Non-Linux releases can be found here.

    2. Extract to the plugin directory & delete download files

  tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv > /dev/null 2>&1

      rm -rf notation-azure-kv.tar.gz
    3. Verify the plugin was installed

      notation plugin ls

    Add the signing certificate to Notation

    Now that you have Notation and the Azure Key Vault plugin installed, add the certificate's keyId created above to Notation.

    1. Get the Certificate Key ID from Azure Key Vault

      az keyvault certificate show \
      --vault-name $keyVaultName \
      --name $keyName \
      --query "kid" --only-show-errors --output tsv

      Replace $keyVaultName and $keyName with the appropriate information.
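Since the next step references a $keyID variable, one option is to capture the command output directly. A small sketch, assuming the same variables as above:

# Store the certificate's Key ID for use with notation key add
keyID=$(az keyvault certificate show \
  --vault-name $keyVaultName \
  --name $keyName \
  --query "kid" --only-show-errors --output tsv)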

    2. Add the Key ID to KMS using Notation

      notation key add --plugin azure-kv --id $keyID $keyName
    3. Check the key list

      notation key ls

    Sign Container Images

    At this point, all that's left is to sign the container images.

    Run the notation sign command to sign the api and web container images:

    notation sign $registry.azurecr.io/web:$tag \
    --username $tokenName \
    --password $tokenPassword

    notation sign $registry.azurecr.io/api:$tag \
    --username $tokenName \
    --password $tokenPassword

    Replace $registry, $tag, $tokenName, and $tokenPassword with the appropriate values. To improve security, use a SHA hash for the tag.
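If you prefer to sign by digest rather than by tag, one way to look up the manifest digest is sketched below. This is not part of the original walkthrough; treat the az acr repository show flags and the digest query as assumptions to verify against your CLI version.

# Hypothetical helper: resolve the digest for the web image and sign by digest
digest=$(az acr repository show \
  --name $registry \
  --image web:$tag \
  --query digest \
  --output tsv)

notation sign $registry.azurecr.io/web@$digest \
  --username $tokenName \
  --password $tokenPassword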

    NOTE: If you didn't take note of the token password, you can rerun the az acr token create command to generate a new password.

    Conclusion

    Digital signing plays a critical role in ensuring the security of software supply chains.

    By signing software components, organizations can verify the authenticity and integrity of software, helping to prevent unauthorized modifications, tampering, and malware.

    And if you want to take digital signing to a whole new level by using them to prevent the deployment of unsigned container images, check out the Ratify project on GitHub!

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/service/index.html b/cnny-2023/tags/service/index.html index 0fa3cbad2e..3e75553594 100644 --- a/cnny-2023/tags/service/index.html +++ b/cnny-2023/tags/service/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "service"

    View All Tags

    · 11 min read
    Paul Yu

    Welcome to Day 2 of Week 2 of #CloudNativeNewYear!

    The theme for this week is #Kubernetes fundamentals. Yesterday we talked about how to deploy a containerized web app workload to Azure Kubernetes Service (AKS). Today we'll explore the topic of services and ingress and walk through the steps of making our containers accessible both internally as well as over the internet so that you can share it with the world 😊

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Exposing Pods via Service
    • Exposing Services via Ingress
    • Takeaways
    • Resources

    Exposing Pods via Service

There are a few ways to expose your pod in Kubernetes. One way is to take an imperative approach and use the kubectl expose command. This is probably the quickest way to achieve your goal, but it isn't the best way. A better way to expose your pod is to take a declarative approach: create a service manifest file and deploy it using the kubectl apply command.

    Don't worry if you are unsure of how to make this manifest, we'll use kubectl to help generate it.

    First, let's ensure we have the database deployed on our AKS cluster.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    kubectl apply -f ./manifests/deployment-db.yaml

Next, let's deploy the application. If you are following along from yesterday's content, there isn't anything you need to change; however, if you are deploying the app from scratch, you'll need to modify the deployment-app.yaml manifest and update it with your image tag and database pod's IP address.

    kubectl apply -f ./manifests/deployment-app.yaml

    Now, let's expose the database using a service so that we can leverage Kubernetes' built-in service discovery to be able to reference it by name; not pod IP. Run the following command.

    kubectl expose deployment azure-voting-db \
    --port=5432 \
    --target-port=5432

With the database exposed using a service, we can update the app deployment manifest to use the service name instead of the pod IP. This way, if the pod ever gets assigned a new IP, we don't have to worry about updating the IP each time and redeploying our web application. Kubernetes has an internal service discovery mechanism in place that allows us to reference a service by its name.
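If you'd like to see that service discovery in action, you can resolve the service name from a throwaway pod; within the same namespace the short name azure-voting-db resolves (the fully qualified form is azure-voting-db.default.svc.cluster.local). A quick sketch; the dnsutils image below is an assumption and any image with nslookup will do:

# Resolve the database service name from inside the cluster
kubectl run -it --rm dnsutils \
  --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
  --restart=Never -- nslookup azure-voting-db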

    Let's make an update to the manifest. Replace the environment variable for DATABASE_SERVER with the following:

    - name: DATABASE_SERVER
    value: azure-voting-db

    Re-deploy the app with the updated configuration.

    kubectl apply -f ./manifests/deployment-app.yaml

    One service down, one to go. Run the following command to expose the web application.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080

Notice the --type argument has a value of LoadBalancer. This service type is implemented by the cloud-controller-manager which is part of the Kubernetes control plane. When using a managed Kubernetes cluster such as Azure Kubernetes Service, a public standard load balancer will be provisioned when the service type is set to LoadBalancer. The load balancer will also have a public IP assigned, which will make your deployment publicly available.

    Kubernetes supports four service types:

    • ClusterIP: this is the default and limits service access to internal traffic within the cluster
    • NodePort: this assigns a port mapping on the node's IP address and allows traffic from the virtual network (outside the cluster)
    • LoadBalancer: as mentioned above, this creates a cloud-based load balancer
    • ExternalName: this is used in special case scenarios where you want to map a service to an external DNS name

    📝 NOTE: When exposing a web application to the internet, allowing external users to connect to your Service directly is not the best approach. Instead, you should use an Ingress, which we'll cover in the next section.

    Now, let's confirm you can reach the web app from the internet. You can use the following command to print the URL to your terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Great! The kubectl expose command gets the job done, but as mentioned above, it is not the best method of exposing deployments. It is better to expose deployments declaratively using a service manifest, so let's delete the services and redeploy using manifests.

    kubectl delete service azure-voting-db azure-voting-app

    To use kubectl to generate our manifest file, we can use the same kubectl expose command that we ran earlier but this time, we'll include --output=yaml and --dry-run=client. This will instruct the command to output the manifest that would be sent to the kube-api server in YAML format to the terminal.

    Generate the manifest for the database service.

    kubectl expose deployment azure-voting-db \
    --type=ClusterIP \
    --port=5432 \
    --target-port=5432 \
    --output=yaml \
    --dry-run=client > ./manifests/service-db.yaml

    Generate the manifest for the application service.

    kubectl expose deployment azure-voting-app \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080 \
    --output=yaml \
    --dry-run=client > ./manifests/service-app.yaml

    The command above redirected the YAML output to your manifests directory. Here is what the web application service looks like.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: azure-voting-app
  type: LoadBalancer
status:
  loadBalancer: {}

    💡 TIP: To view the schema of any api-resource in Kubernetes, you can use the kubectl explain command. In this case the kubectl explain service command will tell us exactly what each of these fields do.
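You can also drill into nested fields with dot notation. For example, to see what the type field accepts:

kubectl explain service.spec.type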

    Re-deploy the services using the new service manifests.

    kubectl apply -f ./manifests/service-db.yaml -f ./manifests/service-app.yaml

    # You should see TYPE is set to LoadBalancer and the EXTERNAL-IP is set
    kubectl get service azure-voting-db azure-voting-app

Confirm that our application is accessible again. Run the following command to print the URL to the terminal.

    echo "http://$(kubectl get service azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    That was easy, right? We just exposed both of our pods using Kubernetes services. The database only needs to be accessible from within the cluster so ClusterIP is perfect for that. For the web application, we specified the type to be LoadBalancer so that we can access the application over the public internet.

    But wait... remember that if you want to expose web applications over the public internet, a Service with a public IP is not the best way; the better approach is to use an Ingress resource.

    Exposing Services via Ingress

    If you read through the Kubernetes documentation on Ingress you will see a diagram that depicts the Ingress sitting in front of the Service resource with a routing rule between it. In order to use Ingress, you need to deploy an Ingress Controller and it can be configured with many routing rules to forward traffic to one or many backend services. So effectively, an Ingress is a load balancer for your Services.

    With that said, we no longer need a service type of LoadBalancer since the service does not need to be accessible from the internet. It only needs to be accessible from the Ingress Controller (internal to the cluster) so we can change the service type to ClusterIP.

    Update your service.yaml file to look like this:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: azure-voting-app
  name: azure-voting-app
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: azure-voting-app

    📝 NOTE: The default service type is ClusterIP so we can omit the type altogether.

    Re-apply the app service manifest.

    kubectl apply -f ./manifests/service-app.yaml

    # You should see TYPE set to ClusterIP and EXTERNAL-IP set to <none>
    kubectl get service azure-voting-app

    Next, we need to install an Ingress Controller. There are quite a few options, and the Kubernetes-maintained NGINX Ingress Controller is commonly deployed.

    You could install this manually by following these instructions, but if you do that you'll be responsible for maintaining and supporting the resource.

    I like to take advantage of free maintenance and support when I can get it, so I'll opt to use the Web Application Routing add-on for AKS.

    💡 TIP: Whenever you install an AKS add-on, it will be maintained and fully supported by Azure Support.

    Enable the web application routing add-on in our AKS cluster with the following command.

az aks addon enable \
  --name <YOUR_AKS_NAME> \
  --resource-group <YOUR_AKS_RESOURCE_GROUP> \
  --addon web_application_routing

    ⚠️ WARNING: This command can take a few minutes to complete

    Now, let's use the same approach we took in creating our service to create our Ingress resource. Run the following command to generate the Ingress manifest.

    kubectl create ingress azure-voting-app \
    --class=webapprouting.kubernetes.azure.com \
    --rule="/*=azure-voting-app:80" \
    --output yaml \
    --dry-run=client > ./manifests/ingress.yaml

    The --class=webapprouting.kubernetes.azure.com option activates the AKS web application routing add-on. This AKS add-on can also integrate with other Azure services such as Azure DNS and Azure Key Vault for TLS certificate management and this special class makes it all work.

    The --rule="/*=azure-voting-app:80" option looks confusing but we can use kubectl again to help us understand how to format the value for the option.

    kubectl create ingress --help

    In the output you will see the following:

    --rule=[]:
    Rule in format host/path=service:port[,tls=secretname]. Paths containing the leading character '*' are
    considered pathType=Prefix. tls argument is optional.

    It expects a host and path separated by a forward-slash, then expects the backend service name and port separated by a colon. We're not using a hostname for this demo so we can omit it. For the path, an asterisk is used to specify a wildcard path prefix.

    So, the value of /*=azure-voting-app:80 creates a routing rule for all paths following the domain (or in our case since we don't have a hostname specified, the IP) to route traffic to our azure-voting-app backend service on port 80.
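For reference, if you did want to include a hostname and a TLS secret, the rule might look something like the sketch below (the hostname and secret name are placeholders, not values used in this walkthrough):

kubectl create ingress azure-voting-app \
  --class=webapprouting.kubernetes.azure.com \
  --rule="eshop.contoso.com/*=azure-voting-app:80,tls=azure-voting-tls" \
  --output yaml \
  --dry-run=client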

    📝 NOTE: Configuring the hostname and TLS is outside the scope of this demo but please visit this URL https://bit.ly/aks-webapp-routing for an in-depth hands-on lab centered around Web Application Routing on AKS.

    Your ingress.yaml file should look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: azure-voting-app
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
    - http:
        paths:
          - backend:
              service:
                name: azure-voting-app
                port:
                  number: 80
            path: /
            pathType: Prefix
status:
  loadBalancer: {}

    Apply the app ingress manifest.

    kubectl apply -f ./manifests/ingress.yaml

    Validate the web application is available from the internet again. You can run the following command to print the URL to the terminal.

    echo "http://$(kubectl get ingress azure-voting-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Takeaways

Exposing your applications both internally and externally can be easily achieved using Service and Ingress resources respectively. If your service is HTTP or HTTPS based and needs to be accessible from outside the cluster, use Ingress with an internal Service (i.e., ClusterIP or NodePort); otherwise, use the Service resource. If your TCP-based Service needs to be publicly accessible, you set the type to LoadBalancer to expose a public IP for it. To learn more about these resources, please visit the links listed below.

    Lastly, if you are unsure how to begin writing your service manifest, you can use kubectl and have it do most of the work for you 🥳

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/windows/index.html b/cnny-2023/tags/windows/index.html index ebfe06ece3..1bd14bf53b 100644 --- a/cnny-2023/tags/windows/index.html +++ b/cnny-2023/tags/windows/index.html @@ -14,14 +14,14 @@ - +

    One post tagged with "windows"

    View All Tags

    · 7 min read
    Vinicius Apolinario

    Welcome to Day 3 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about using Draft to accelerate your Kubernetes adoption. Today we'll explore the topic of Windows containers.

    What We'll Cover

    • Introduction
    • Windows containers overview
    • Windows base container images
    • Isolation
    • Exercise: Try this yourself!
    • Resources: For self-study!

    Introduction

Windows containers were launched along with Windows Server 2016 and have evolved since. In their latest release, Windows Server 2022, Windows containers have reached a great level of maturity and allow customers to run production-grade workloads.

While suitable for new development, Windows containers also provide developers and operations teams with a different approach than Linux containers. They allow existing Windows applications to be containerized with little or no code change. They also allow professionals who are more comfortable with the Windows platform and OS to leverage their skill set, while taking advantage of the containers platform.

Windows containers overview

In essence, Windows containers are very similar to Linux containers. Since Windows containers use the same foundation of Docker containers, you can expect the same architecture to apply - with some specifics for the Windows OS. For example, when running a Windows container via Docker, you use the same commands, such as docker run. To pull a container image, you can use docker pull, just like on Linux. However, to run a Windows container, you also need a Windows container host. This requirement is there because, as you might remember, a container shares the OS kernel with its container host.

On Kubernetes, Windows containers have been supported since Windows Server 2019. Just like with Docker, you can manage Windows containers like any other resource in the Kubernetes ecosystem. A Windows node can be part of a Kubernetes cluster, allowing you to run Windows container based applications on services like Azure Kubernetes Service. To deploy a Windows application to a Windows pod in Kubernetes, you can author a YAML specification much like you would for Linux. The main difference is that you would point to an image that runs on Windows, and you need to specify a node selector to indicate that the pod needs to run on a Windows node.

    Windows base container images

On Windows containers, you will always use a base container image provided by Microsoft. This base container image contains the OS binaries for the container to run. This image can be as large as 3GB+, or as small as ~300MB. The difference in size is a consequence of the APIs and components available in each Windows base container image. There are primarily three images: Nano Server, Server Core, and Server.

Nano Server is the smallest image, ranging around 300MB. It's a base container image for new development and cloud-native scenarios. Applications need to target Nano Server as the Windows OS, so not all frameworks will work. For example, .NET works on Nano Server, but .NET Framework doesn't. Other third-party frameworks also work on Nano Server, such as Apache, NodeJS, Python, Tomcat, Java runtime, JBoss, and Redis, among others.

Server Core is a much larger base container image, ranging around 1.25GB. Its larger size is compensated by its application compatibility. Simply put, any application that meets the requirements to be run in a Windows container can be containerized with this image.

The Server image builds on the Server Core one. It ranges around 3.1GB and has even greater application compatibility than the Server Core image. In addition to the traditional Windows APIs and components, this image allows for scenarios such as Machine Learning via DirectX with GPU access.

The best image for your scenario depends on the requirements your application has on the Windows OS inside a container. However, there are some scenarios that are not supported at all on Windows containers - such as GUI or RDP dependent applications and some Windows Server infrastructure roles such as Active Directory, among others.

    Isolation

    When running containers, the kernel of the container host is shared with the containers running on it. While extremely convenient, this poses a potential risk for multi-tenant scenarios. If one container is compromised and has access to the host, it could potentially compromise other containers in the same system.

    For enterprise customers running on-premises (or even in the cloud), this can be mitigated by using a VM as a container host and considering the VM itself a security boundary. However, if multiple workloads from different tenants need to share the same host, Windows containers offer another option: Hyper-V isolation. While the name Hyper-V is associated with VMs, its virtualization capabilities extend to other services, including containers. Hyper-V isolated containers run on a purpose built, extremely small, highly performant VM. However, you manage a container running with Hyper-V isolation the same way you do with a process isolated one. In fact, the only notable difference is that you need to append the --isolation=hyperv tag to the docker run command.

    Exercise

    Here are a few examples of how to use Windows containers:

    Run Windows containers via Docker on your machine

    To pull a Windows base container image:

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022

    To run a basic IIS container:

#This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    Run the same IIS container with Hyper-V isolation

#This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 --isolation=hyperv mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    To run a Windows container interactively:

    docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 powershell

    Run Windows containers on Kubernetes

To prepare an AKS cluster for Windows containers: Note: Replace the values in the example below with the ones from your environment.

    echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
    az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --generate-ssh-keys \
    --windows-admin-username $WINDOWS_USERNAME \
    --vm-set-type VirtualMachineScaleSets \
    --network-plugin azure

    To add a Windows node pool for Windows containers:

    az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwin \
    --node-count 1

    Deploy a sample ASP.Net application to the AKS cluster above using the YAML file below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "kubernetes.io/os": windows
      containers:
        - name: sample
          image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
          resources:
            limits:
              cpu: 1
              memory: 800M
          ports:
            - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
  selector:
    app: sample

    Save the file above and run the command below on your Kubernetes cluster:

kubectl apply -f <filename>

    Once deployed, you can access the application by getting the IP address of your service:

    kubectl get service
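Once the EXTERNAL-IP is populated, you can print a browsable URL the same way as in earlier posts. A small sketch, assuming the service is named sample as in the manifest above:

echo "http://$(kubectl get service sample -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"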

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/tags/workload-identity/index.html b/cnny-2023/tags/workload-identity/index.html index 43d85c5827..fd93dd5510 100644 --- a/cnny-2023/tags/workload-identity/index.html +++ b/cnny-2023/tags/workload-identity/index.html @@ -14,13 +14,13 @@ - +

    One post tagged with "workload-identity"

    View All Tags

    · 12 min read
    Paul Yu

    Welcome to Day 2 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we talked about getting an existing application running in Kubernetes with a full pipeline in GitHub Actions. Today we'll evaluate our sample application's configuration, storage, and networking requirements and implement using Kubernetes and Azure resources.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Gather requirements
    • Implement environment variables using ConfigMaps
    • Implement persistent volumes using Azure Files
    • Implement secrets using Azure Key Vault
    • Re-package deployments
    • Conclusion
    • Resources
    caution

    Before you begin, make sure you've gone through yesterday's post to set up your AKS cluster.

    Gather requirements

The eShopOnWeb application is written in .NET 7 and has two major pieces of functionality. The web UI is where customers can browse and shop. The web UI also includes an admin portal for managing the product catalog. This admin portal is packaged as a WebAssembly application and relies on a separate REST API service. Both the web UI and the REST API connect to the same SQL Server container.

Looking through the source code, which can be found here, we can identify requirements for configs, persistent storage, and secrets.

    Database server

    • Need to store the password for the sa account as a secure secret
    • Need persistent storage volume for data directory
    • Need to inject environment variables for SQL Server license type and EULA acceptance

    Web UI and REST API service

    • Need to store database connection string as a secure secret
    • Need to inject ASP.NET environment variables to override app settings
    • Need persistent storage volume for ASP.NET key storage

    Implement environment variables using ConfigMaps

ConfigMaps are relatively straightforward to create. If you were following along with the examples last week, this should be review 😉

    Create a ConfigMap to store database environment variables.

    kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: mssql-settings
data:
  MSSQL_PID: Developer
  ACCEPT_EULA: "Y"
    EOF

    Create another ConfigMap to store ASP.NET environment variables.

    kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aspnet-settings
data:
  ASPNETCORE_ENVIRONMENT: Development
    EOF

    Implement persistent volumes using Azure Files

    Similar to last week, we'll take advantage of storage classes built into AKS. For our SQL Server data, we'll use the azurefile-csi-premium storage class and leverage an Azure Files resource as our PersistentVolume.

    Create a PersistentVolumeClaim (PVC) for persisting SQL Server data.

    kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 5Gi
    EOF

    Create another PVC for persisting ASP.NET data.

    kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aspnet-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi-premium
  resources:
    requests:
      storage: 5Gi
    EOF

    Implement secrets using Azure Key Vault

It's a well-known fact that Kubernetes Secrets are not really secrets. They're just base64-encoded values and not secure, especially if malicious users have access to your Kubernetes cluster.
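For example, anyone with read access to Secrets in the namespace can recover a value with nothing more than a base64 decode. A quick sketch using the eshop-secrets object that gets synced later in this post:

# Decode a "secret" value straight from the cluster
kubectl get secret eshop-secrets \
  -o jsonpath='{.data.mssql-password}' | base64 -d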

    In a production scenario, you will want to leverage an external vault like Azure Key Vault or HashiCorp Vault to encrypt and store secrets.

    With AKS, we can enable the Secrets Store CSI driver add-on which will allow us to leverage Azure Key Vault.

    # Set some variables
    RG_NAME=<YOUR_RESOURCE_GROUP_NAME>
    AKS_NAME=<YOUR_AKS_CLUSTER_NAME>
    ACR_NAME=<YOUR_ACR_NAME>

    az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name $AKS_NAME \
    --resource-group $RG_NAME

    With the add-on enabled, you should see aks-secrets-store-csi-driver and aks-secrets-store-provider-azure resources installed on each node in your Kubernetes cluster.

    Run the command below to verify.

    kubectl get pods \
    --namespace kube-system \
    --selector 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'

    The Secrets Store CSI driver allows us to use secret stores via Container Storage Interface (CSI) volumes. This provider offers capabilities such as mounting and syncing between the secure vault and Kubernetes Secrets. On AKS, the Azure Key Vault Provider for Secrets Store CSI Driver enables integration with Azure Key Vault.

    You may not have an Azure Key Vault created yet, so let's create one and add some secrets to it.

    AKV_NAME=$(az keyvault create \
    --name akv-eshop$RANDOM \
    --resource-group $RG_NAME \
    --query name -o tsv)

    # Database server password
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-password \
    --value "@someThingComplicated1234"

    # Catalog database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-catalog \
    --value "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    # Identity database connection string
    az keyvault secret set \
    --vault-name $AKV_NAME \
    --name mssql-connection-identity \
    --value "Server=db;Database=Microsoft.eShopOnWeb.Identity;User Id=sa;Password=@someThingComplicated1234;TrustServerCertificate=True;"

    Pods authentication using Azure Workload Identity

    In order for our Pods to retrieve secrets from Azure Key Vault, we'll need to set up a way for the Pod to authenticate against Azure AD. This can be achieved by implementing the new Azure Workload Identity feature of AKS.

    info

    At the time of this writing, the workload identity feature of AKS is in Preview.

    The workload identity feature within AKS allows us to leverage native Kubernetes resources and link a Kubernetes ServiceAccount to an Azure Managed Identity to authenticate against Azure AD.

For the authentication flow, our Kubernetes cluster will act as an OpenID Connect (OIDC) issuer and will be able to issue identity tokens to ServiceAccounts which will be assigned to our Pods.

    The Azure Managed Identity will be granted permission to access secrets in our Azure Key Vault and with the ServiceAccount being assigned to our Pods, they will be able to retrieve our secrets.

    For more information on how the authentication mechanism all works, check out this doc.

    To implement all this, start by enabling the new preview feature for AKS.

    az feature register \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"
    caution

    This can take several minutes to complete.

Check the status and ensure the state shows Registered before moving forward.

    az feature show \
    --namespace "Microsoft.ContainerService" \
    --name "EnableWorkloadIdentityPreview"

    Update your AKS cluster to enable the workload identity feature and enable the OIDC issuer endpoint.

    az aks update \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --enable-workload-identity \
    --enable-oidc-issuer

    Create an Azure Managed Identity and retrieve its client ID.

    MANAGED_IDENTITY_CLIENT_ID=$(az identity create \
    --name aks-workload-identity \
    --resource-group $RG_NAME \
    --subscription $(az account show --query id -o tsv) \
    --query 'clientId' -o tsv)

    Create the Kubernetes ServiceAccount.

    # Set namespace (this must align with the namespace that your app is deployed into)
    SERVICE_ACCOUNT_NAMESPACE=default

    # Set the service account name
    SERVICE_ACCOUNT_NAME=eshop-serviceaccount

    # Create the service account
    kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: ${MANAGED_IDENTITY_CLIENT_ID}
  labels:
    azure.workload.identity/use: "true"
  name: ${SERVICE_ACCOUNT_NAME}
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
    EOF
    info

Note: to enable this ServiceAccount to work with Azure Workload Identity, you must annotate the resource with azure.workload.identity/client-id and add a label of azure.workload.identity/use: "true".

    That was a lot... Let's review what we just did.

    We have an Azure Managed Identity (object in Azure AD), an OIDC issuer URL (endpoint in our Kubernetes cluster), and a Kubernetes ServiceAccount.

    The next step is to "tie" these components together and establish a Federated Identity Credential so that Azure AD can trust authentication requests from your Kubernetes cluster.

    info

This identity federation can be established between Azure AD and any Kubernetes cluster, not just AKS 🤗

    To establish the federated credential, we'll need the OIDC issuer URL, and a subject which points to your Kubernetes ServiceAccount.

    # Get the OIDC issuer URL
    OIDC_ISSUER_URL=$(az aks show \
    --name $AKS_NAME \
    --resource-group $RG_NAME \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)

    # Set the subject name using this format: `system:serviceaccount:<YOUR_SERVICE_ACCOUNT_NAMESPACE>:<YOUR_SERVICE_ACCOUNT_NAME>`
    SUBJECT=system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME

    az identity federated-credential create \
    --name aks-federated-credential \
    --identity-name aks-workload-identity \
    --resource-group $RG_NAME \
    --issuer $OIDC_ISSUER_URL \
    --subject $SUBJECT
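To sanity-check that the federated credential exists on the managed identity, you can list it. A small sketch, assuming the az identity federated-credential list command is available in your CLI version:

az identity federated-credential list \
  --identity-name aks-workload-identity \
  --resource-group $RG_NAME \
  --output table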

With the authentication components set, we can now create a SecretProviderClass which includes details about the Azure Key Vault, the secrets to pull out of the vault, and the identity used to access the vault.

    # Get the tenant id for the key vault
    TENANT_ID=$(az keyvault show \
    --name $AKV_NAME \
    --resource-group $RG_NAME \
    --query properties.tenantId -o tsv)

    # Create the secret provider for azure key vault
    kubectl apply -f - <<EOF
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: eshop-azure-keyvault
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "${MANAGED_IDENTITY_CLIENT_ID}"
    keyvaultName: "${AKV_NAME}"
    cloudName: ""
    objects: |
      array:
        - |
          objectName: mssql-password
          objectType: secret
          objectVersion: ""
        - |
          objectName: mssql-connection-catalog
          objectType: secret
          objectVersion: ""
        - |
          objectName: mssql-connection-identity
          objectType: secret
          objectVersion: ""
    tenantId: "${TENANT_ID}"
  secretObjects:
    - secretName: eshop-secrets
      type: Opaque
      data:
        - objectName: mssql-password
          key: mssql-password
        - objectName: mssql-connection-catalog
          key: mssql-connection-catalog
        - objectName: mssql-connection-identity
          key: mssql-connection-identity
    EOF

Finally, let's grant the Azure Managed Identity permissions to retrieve secrets from the Azure Key Vault.

    az keyvault set-policy \
    --name $AKV_NAME \
    --secret-permissions get \
    --spn $MANAGED_IDENTITY_CLIENT_ID

    Re-package deployments

    Update your database deployment to load environment variables from our ConfigMap, attach the PVC and SecretProviderClass as volumes, mount the volumes into the Pod, and use the ServiceAccount to retrieve secrets.

Additionally, you may notice the database Pod is set to use fsGroup:10001 as part of the securityContext. This is required because the MSSQL container runs as a non-root account called mssql, and setting the fsGroup ensures that account has the proper permissions to read/write data at the /var/opt/mssql mount path.

    kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      securityContext:
        fsGroup: 10001
      serviceAccountName: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: db
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          envFrom:
            - configMapRef:
                name: mssql-settings
          env:
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-password
          resources: {}
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
    EOF

    We'll update the API and Web deployments in a similar way.

    # Set the image tag
    IMAGE_TAG=<YOUR_IMAGE_TAG>

    # API deployment
    kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      serviceAccount: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: api
          image: ${ACR_NAME}.azurecr.io/api:${IMAGE_TAG}
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: aspnet-settings
          env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
          resources: {}
          volumeMounts:
            - name: aspnet
              mountPath: ~/.aspnet/https:/root/.aspnet/https:ro
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: aspnet
          persistentVolumeClaim:
            claimName: aspnet-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
    EOF

    ## Web deployment
    kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      serviceAccount: ${SERVICE_ACCOUNT_NAME}
      containers:
        - name: web
          image: ${ACR_NAME}.azurecr.io/web:${IMAGE_TAG}
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: aspnet-settings
          env:
            - name: ConnectionStrings__CatalogConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-catalog
            - name: ConnectionStrings__IdentityConnection
              valueFrom:
                secretKeyRef:
                  name: eshop-secrets
                  key: mssql-connection-identity
          resources: {}
          volumeMounts:
            - name: aspnet
              mountPath: ~/.aspnet/https:/root/.aspnet/https:ro
            - name: eshop-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: aspnet
          persistentVolumeClaim:
            claimName: aspnet-data
        - name: eshop-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: eshop-azure-keyvault
    EOF

    If all went well with your deployment updates, you should be able to browse to your website and buy some merchandise again 🥳

    echo "http://$(kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"

    Conclusion

Although there are no visible changes to our website, we've made a ton of changes on the Kubernetes backend to make this application much more secure and resilient.

    We used a combination of Kubernetes resources and AKS-specific features to achieve our goal of securing our secrets and ensuring data is not lost on container crashes and restarts.

To learn more about the components we leveraged here today, check out the resources and additional tutorials listed below.

    You can also find manifests with all the changes made in today's post in the Azure-Samples/eShopOnAKS repository.

    See you in the next post!

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/zero-to-hero/index.html b/cnny-2023/tags/zero-to-hero/index.html index 4579af2d36..e6523a6f9b 100644 --- a/cnny-2023/tags/zero-to-hero/index.html +++ b/cnny-2023/tags/zero-to-hero/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "zero-to-hero"

    View All Tags

    · 4 min read
    Cory Skimming
    Devanshi Joshi
    Steven Murawski
    Nitya Narasimhan

    Welcome to the Kick-off Post for #30DaysOfCloudNative - one of the core initiatives within #CloudNativeNewYear! Over the next four weeks, join us as we take you from fundamentals to functional usage of Cloud-native technologies, one blog post at a time! Read on to learn a little bit about this initiative and what you can expect to learn from this journey!

    What We'll Cover


    Cloud-native New Year

Welcome to Week 01 of 🥳 #CloudNativeNewYear ! Today, we kick off a full month of activities to skill you up on all things Cloud-native on Azure with content, events, and community interactions! Read on to learn about what we have planned!


    Explore our initiatives

    We have a number of initiatives planned for the month to help you learn and skill up on relevant technologies. Click on the links to visit the relevant pages for each.

    We'll go into more details about #30DaysOfCloudNative in this post - don't forget to subscribe to the blog to get daily posts delivered directly to your preferred feed reader!


    Register for events!

    What are 3 things you can do today, to jumpstart your learning journey?


    #30DaysOfCloudNative

    #30DaysOfCloudNative is a month-long series of daily blog posts grouped into 4 themed weeks - taking you from core concepts to end-to-end solution examples in 30 days. Each article will be short (5-8 mins reading time) and provide exercises and resources to help you reinforce learnings and take next steps.

    This series focuses on the Cloud-native On Azure learning journey in four stages, each building on the previous week to help you skill up in a beginner-friendly way:

    We have a tentative weekly-themed roadmap for the topics we hope to cover and will keep this updated as we go with links to actual articles as they get published.

    Week 1: FOCUS ON CLOUD-NATIVE FUNDAMENTALS

Here's a sneak peek at the week 1 schedule. We'll start with a broad review of cloud-native fundamentals and walk through the core concepts of microservices, containers, and Kubernetes.

    • Jan 23: Learn Core Concepts for Cloud-native
    • Jan 24: Container 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud Native Options

    Let's Get Started!

    Now you know everything! We hope you are as excited as we are to dive into a full month of active learning and doing! Don't forget to subscribe for updates in your favorite feed reader! And look out for our first Cloud-native Fundamentals post on January 23rd!


    - + \ No newline at end of file diff --git a/cnny-2023/tags/zero-to-hero/page/10/index.html b/cnny-2023/tags/zero-to-hero/page/10/index.html index 713bdd935e..17299b0d9c 100644 --- a/cnny-2023/tags/zero-to-hero/page/10/index.html +++ b/cnny-2023/tags/zero-to-hero/page/10/index.html @@ -14,7 +14,7 @@ - + @@ -22,7 +22,7 @@

    16 posts tagged with "zero-to-hero"

    View All Tags

    · 14 min read
    Steven Murawski

    Welcome to Day 1 of Week 3 of #CloudNativeNewYear!

The theme for this week is Bringing Your Application to Kubernetes. Last week we talked about Kubernetes Fundamentals. Today we'll explore getting an existing application running in Kubernetes with a full pipeline in GitHub Actions.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Our Application
    • Adding Some Infrastructure as Code
    • Building and Publishing a Container Image
    • Deploying to Kubernetes
    • Summary
    • Resources

    Our Application

This week we'll be taking an existing application - something similar to a typical line of business application - and setting it up to run in Kubernetes. Over the course of the week, we'll address different concerns. Today we'll focus on updating our CI/CD process to handle standing up (or validating that we have) an Azure Kubernetes Service (AKS) environment, building and publishing container images for our web site and API server, and getting those services running in Kubernetes.

    The application we'll be starting with is eShopOnWeb. This application has a web site and API which are backed by a SQL Server instance. It's built in .NET 7, so it's cross-platform.

    info

    For the enterprising among you, you may notice that there are a number of different eShopOn* variants on GitHub, including eShopOnContainers. We aren't using that example as it's more of an end state than a starting place. Afterwards, feel free to check out that example as what this solution could look like as a series of microservices.

    Adding Some Infrastructure as Code

    Just like last week, we need to stand up an AKS environment. This week, however, rather than running commands in our own shell, we'll set up GitHub Actions to do that for us.

There is a LOT of plumbing in this section, but once it's set up, it'll make our lives a lot easier. This section ensures that we have an environment to deploy our application into, configured the way we want. We can easily extend this to accommodate multiple environments or add additional microservices with minimal new effort.

    Federated Identity

Setting up a federated identity will give us a more secure and auditable way of accessing Azure from GitHub Actions. For more about setting up a federated identity, Microsoft Learn has the details on connecting GitHub Actions to Azure.

Here, we'll just walk through the setup of the identity and configure GitHub to use that identity to deploy our AKS environment and interact with our Azure Container Registry.

    The examples will use PowerShell, but a Bash version of the setup commands is available in the week3/day1 branch.

    Prerequisites

    To follow along, you'll need:

    • a GitHub account
    • an Azure Subscription
    • the Azure CLI
    • and the Git CLI.

    You'll need to fork the source repository under your GitHub user or organization where you can manage secrets and GitHub Actions.

    It would be helpful to have the GitHub CLI, but it's not required.
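If you do have the GitHub CLI installed, forking and cloning can be done in one step. A sketch, assuming you want the fork under your own GitHub user:

gh repo fork Azure-Samples/eShopOnAKS --clone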

    Set Up Some Defaults

    You will need to update one or more of the variables (your user or organization, what branch you want to work off of, and possibly the Azure AD application name if there is a conflict).

    # Replace the gitHubOrganizationName value
    # with the user or organization you forked
    # the repository under.

    $githubOrganizationName = 'Azure-Samples'
    $githubRepositoryName = 'eShopOnAKS'
    $branchName = 'week3/day1'
    $applicationName = 'cnny-week3-day1'

    Create an Azure AD Application

    Next, we need to create an Azure AD application.

    # Create an Azure AD application
    $aksDeploymentApplication = New-AzADApplication -DisplayName $applicationName

    Set Up Federation for that Azure AD Application

    And configure that application to allow federated credential requests from our GitHub repository for a particular branch.

    # Create a federated identity credential for the application
New-AzADAppFederatedCredential `
    -Name $applicationName `
    -ApplicationObjectId $aksDeploymentApplication.Id `
    -Issuer 'https://token.actions.githubusercontent.com' `
    -Audience 'api://AzureADTokenExchange' `
    -Subject "repo:$($githubOrganizationName)/$($githubRepositoryName):ref:refs/heads/$branchName"

    Create a Service Principal for the Azure AD Application

    Once the application has been created, we need a service principal tied to that application. The service principal can be granted rights in Azure.

    # Create a service principal for the application
    New-AzADServicePrincipal -AppId $($aksDeploymentApplication.AppId)

    Give that Service Principal Rights to Azure Resources

    Because our Bicep deployment exists at the subscription level and we are creating role assignments, we need to give it Owner rights. If we changed the scope of the deployment to just a resource group, we could apply more scoped permissions.

$azureContext = Get-AzContext
New-AzRoleAssignment `
    -ApplicationId $($aksDeploymentApplication.AppId) `
    -RoleDefinitionName Owner `
    -Scope "/subscriptions/$($azureContext.Subscription.Id)"

    Add Secrets to GitHub Repository

    If you have the GitHub CLI, you can use that right from your shell to set the secrets needed.

    gh secret set AZURE_CLIENT_ID --body $aksDeploymentApplication.AppId
    gh secret set AZURE_TENANT_ID --body $azureContext.Tenant.Id
    gh secret set AZURE_SUBSCRIPTION_ID --body $azureContext.Subscription.Id

    Otherwise, you can create them through the web interface like I did in the Learn Live event below.

    info

    It may look like the whole video will play, but it'll stop after configuring the secrets in GitHub (after about 9 minutes)

    The video shows creating the Azure AD application, service principals, and configuring the federated identity in Azure AD and GitHub.

    Creating a Bicep Deployment

Reusable Workflows

We'll create our Bicep deployment in a reusable workflow. What are they? The previous link has the documentation, and in the video below my colleague Brandon Martinez and I talk about them.

    This workflow is basically the same deployment we did in last week's series, just in GitHub Actions.

    Start by creating a file called deploy_aks.yml in the .github/workflows directory with the below contents.

name: deploy

on:
  workflow_call:
    inputs:
      resourceGroupName:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true
    outputs:
      containerRegistryName:
        description: Container Registry Name
        value: ${{ jobs.deploy.outputs.containerRegistryName }}
      containerRegistryUrl:
        description: Container Registry Login Url
        value: ${{ jobs.deploy.outputs.containerRegistryUrl }}
      resourceGroupName:
        description: Resource Group Name
        value: ${{ jobs.deploy.outputs.resourceGroupName }}
      aksName:
        description: Azure Kubernetes Service Cluster Name
        value: ${{ jobs.deploy.outputs.aksName }}

permissions:
  id-token: write
  contents: read

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/arm-deploy@v1
        name: Run preflight validation
        with:
          deploymentName: ${{ github.run_number }}
          scope: subscription
          region: eastus
          template: ./deploy/main.bicep
          parameters: >
            resourceGroup=${{ inputs.resourceGroupName }}
          deploymentMode: Validate

  deploy:
    needs: validate
    runs-on: ubuntu-latest
    outputs:
      containerRegistryName: ${{ steps.deploy.outputs.acr_name }}
      containerRegistryUrl: ${{ steps.deploy.outputs.acr_login_server_url }}
      resourceGroupName: ${{ steps.deploy.outputs.resource_group_name }}
      aksName: ${{ steps.deploy.outputs.aks_name }}
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: azure/arm-deploy@v1
        id: deploy
        name: Deploy Bicep file
        with:
          failOnStdErr: false
          deploymentName: ${{ github.run_number }}
          scope: subscription
          region: eastus
          template: ./deploy/main.bicep
          parameters: >
            resourceGroup=${{ inputs.resourceGroupName }}

    Adding the Bicep Deployment

    Once we have the Bicep deployment workflow, we can add it to the primary build definition in .github/workflows/dotnetcore.yml

    Permissions

    First, we need to add a permissions block to let the workflow request our Azure AD token. This can go towards the top of the YAML file (I started it on line 5).

permissions:
  id-token: write
  contents: read

    Deploy AKS Job

    Next, we'll add a reference to our reusable workflow. This will go after the build job.

  deploy_aks:
    needs: [build]
    uses: ./.github/workflows/deploy_aks.yml
    with:
      resourceGroupName: 'cnny-week3'
    secrets:
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Building and Publishing a Container Image

    Now that we have our target environment in place and an Azure Container Registry, we can build and publish our container images.

    Add a Reusable Workflow

    First, we'll create a new file for our reusable workflow at .github/workflows/publish_container_image.yml.

    We'll start the file with a name, the parameters it needs to run, and the permissions requirements for the federated identity request.

name: Publish Container Images

on:
  workflow_call:
    inputs:
      containerRegistryName:
        required: true
        type: string
      containerRegistryUrl:
        required: true
        type: string
      githubSha:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write
  contents: read

    Build the Container Images

Our next step is to build the two container images we'll need for the application: the website and the API. We'll build the container images on our build worker and tag them with the git SHA, so there'll be a direct tie between a point in time in our codebase and the container images that represent it.

jobs:
  publish_container_image:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - name: docker build
        run: |
          docker build . -f src/Web/Dockerfile -t ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          docker build . -f src/PublicApi/Dockerfile -t ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Scan the Container Images

    Before we publish those container images, we'll scan them for vulnerabilities and best practice violations. We can add these two steps (one scan for each image).

      - name: scan web container image
        uses: Azure/container-scan@v0
        with:
          image-name: ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
      - name: scan api container image
        uses: Azure/container-scan@v0
        with:
          image-name: ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

The container images provided have a few items that'll be found. We can create an allowed list at .github/containerscan/allowedlist.yaml to define vulnerabilities or best practice violations that we'll explicitly allow so they don't fail our build.

general:
  vulnerabilities:
    - CVE-2022-29458
    - CVE-2022-3715
    - CVE-2022-1304
    - CVE-2021-33560
    - CVE-2020-16156
    - CVE-2019-8457
    - CVE-2018-8292
  bestPracticeViolations:
    - CIS-DI-0001
    - CIS-DI-0005
    - CIS-DI-0006
    - CIS-DI-0008

    Publish the Container Images

    Finally, we'll log in to Azure, then log in to our Azure Container Registry, and push our images.

      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: acr login
        run: az acr login --name ${{ inputs.containerRegistryName }}
      - name: docker push
        run: |
          docker push ${{ inputs.containerRegistryUrl }}/web:${{ inputs.githubSha }}
          docker push ${{ inputs.containerRegistryUrl }}/api:${{ inputs.githubSha }}

    Update the Build With the Image Build and Publish

Now that we have our reusable workflow to create and publish our container images, we can include it in our primary build definition at .github/workflows/dotnetcore.yml.

  publish_container_image:
    needs: [deploy_aks]
    uses: ./.github/workflows/publish_container_image.yml
    with:
      containerRegistryName: ${{ needs.deploy_aks.outputs.containerRegistryName }}
      containerRegistryUrl: ${{ needs.deploy_aks.outputs.containerRegistryUrl }}
      githubSha: ${{ github.sha }}
    secrets:
      AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
      AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
      AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

    Deploying to Kubernetes

    Finally, we've gotten enough set up that a commit to the target branch will:

    • build and test our application code
    • set up (or validate) our AKS and ACR environment
    • and create, scan, and publish our container images to ACR

    Our last step will be to deploy our application to Kubernetes. We'll use the basic building blocks we worked with last week, deployments and services.

    Starting the Reusable Workflow to Deploy to AKS

    We'll start our workflow with our parameters that we need, as well as the permissions to access the token to log in to Azure.

    We'll check out our code, then log in to Azure, and use the az CLI to get credentials for our AKS cluster.

name: deploy_to_aks

on:
  workflow_call:
    inputs:
      aksName:
        required: true
        type: string
      resourceGroupName:
        required: true
        type: string
      containerRegistryUrl:
        required: true
        type: string
      githubSha:
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1
        name: Sign in to Azure
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: Get AKS Credentials
        run: |
          az aks get-credentials --resource-group ${{ inputs.resourceGroupName }} --name ${{ inputs.aksName }}

    Edit the Deployment For Our Current Image Tag

    Let's add the Kubernetes manifests to our repo. This post is long enough, so you can find the content for the manifests folder in the manifests folder in the source repo under the week3/day1 branch.

    tip

    If you only forked the main branch of the source repo, you can easily get the updated manifests by using the following git commands:

    git remote add upstream https://github.com/Azure-Samples/eShopOnAks
    git fetch upstream week3/day1
    git checkout upstream/week3/day1 manifests

    This will make the week3/day1 branch available locally and then we can update the manifests directory to match the state of that branch.

The deployment and service definitions should be familiar from last week's content (but not identical). This week, however, there's a new file in the manifests - ./manifests/kustomization.yaml

This file helps us edit our Kubernetes manifests more dynamically, and support for it is baked right into the kubectl command.
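If you're curious what kubectl will end up applying, you can preview the transformed output locally before wiring this into the pipeline - a quick sanity check, not part of the workflow itself:

# Render the kustomized manifests without applying them
kubectl kustomize ./manifests

# Or do a client-side dry run of the same apply
kubectl apply -k ./manifests --dry-run=client -o yaml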

    Kustomize Definition

Kustomize lets us point at specific resource manifests and declare which parts of them to replace. We've put some placeholders in our file as well, so we can swap those out on each run of our CI/CD system.

    In ./manifests/kustomization.yaml you will see:

resources:
  - deployment-api.yaml
  - deployment-web.yaml

# Change the image name and version
images:
  - name: notavalidregistry.azurecr.io/api:v0.1.0
    newName: <YOUR_ACR_SERVER>/api
    newTag: <YOUR_IMAGE_TAG>
  - name: notavalidregistry.azurecr.io/web:v0.1.0
    newName: <YOUR_ACR_SERVER>/web
    newTag: <YOUR_IMAGE_TAG>

    Replacing Values in our Build

    Now, we encounter a little problem - our deployment files need to know the tag and ACR server. We can do a bit of sed magic to edit the file on the fly.

    In .github/workflows/deploy_to_aks.yml, we'll add:

      - name: replace_placeholders_with_current_run
        run: |
          sed -i "s/<YOUR_ACR_SERVER>/${{ inputs.containerRegistryUrl }}/g" ./manifests/kustomization.yaml
          sed -i "s/<YOUR_IMAGE_TAG>/${{ inputs.githubSha }}/g" ./manifests/kustomization.yaml

    Deploying the Manifests

With our manifests in place and our kustomization.yaml file (and the commands to update it at runtime) ready to go, we can deploy our manifests.

First, we'll deploy our database (deployment and service). Next, we'll use the -k parameter on kubectl to tell it to look for a Kustomize configuration, transform the requested manifests, and apply those. Finally, we apply the service definitions for the web and API deployments.

      - run: |
          kubectl apply -f ./manifests/deployment-db.yaml \
            -f ./manifests/service-db.yaml
          kubectl apply -k ./manifests
          kubectl apply -f ./manifests/service-api.yaml \
            -f ./manifests/service-web.yaml
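Once the workflow has run, you can confirm the rollout from your own shell. The deployment names below are an assumption based on the manifest file names (web and api), so adjust them if your manifests use different names:

# Check that the deployments rolled out and the services exist
kubectl get deployments,services
kubectl rollout status deployment/web
kubectl rollout status deployment/api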

    Summary

    We've covered a lot of ground in today's post. We set up federated credentials with GitHub. Then we added reusable workflows to deploy an AKS environment and build/scan/publish our container images, and then to deploy them into our AKS environment.

    This sets us up to start making changes to our application and Kubernetes configuration and have those changes automatically validated and deployed by our CI/CD system. Tomorrow, we'll look at updating our application environment with runtime configuration, persistent storage, and more.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/zero-to-hero/page/11/index.html b/cnny-2023/tags/zero-to-hero/page/11/index.html index 9183bd8723..98f48badb5 100644 --- a/cnny-2023/tags/zero-to-hero/page/11/index.html +++ b/cnny-2023/tags/zero-to-hero/page/11/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "zero-to-hero"

    View All Tags

    · 9 min read
    Steven Murawski

    Welcome to Day 4 of Week 3 of #CloudNativeNewYear!

    The theme for this week is Bringing Your Application to Kubernetes. Yesterday we exposed the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS. Today we'll explore the topic of debugging and instrumentation.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Friday, February 10th at 11 AM PST

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos. Join us Friday, February 10th and bring your questions!

    What We'll Cover

    • Debugging
    • Bridge To Kubernetes
    • Instrumentation
    • Resources: For self-study!

    Debugging

    Debugging applications in a Kubernetes cluster can be challenging for several reasons:

    • Complexity: Kubernetes is a complex system with many moving parts, including pods, nodes, services, and config maps, all of which can interact in unexpected ways and cause issues.
    • Distributed Environment: Applications running in a Kubernetes cluster are often distributed across multiple nodes, which makes it harder to determine the root cause of an issue.
    • Logging and Monitoring: Debugging an application in a Kubernetes cluster requires access to logs and performance metrics, which can be difficult to obtain in a large and dynamic environment.
    • Resource Management: Kubernetes manages resources such as CPU and memory, which can impact the performance and behavior of applications. Debugging resource-related issues requires a deep understanding of the Kubernetes resource model and the underlying infrastructure.
    • Dynamic Nature: Kubernetes is designed to be dynamic, with the ability to add and remove resources as needed. This dynamic nature can make it difficult to reproduce issues and debug problems.

    However, there are many tools and practices that can help make debugging applications in a Kubernetes cluster easier, such as using centralized logging, monitoring, and tracing solutions, and following best practices for managing resources and deployment configurations.

    There's also another great tool in our toolbox - Bridge to Kubernetes.

    Bridge to Kubernetes

    Bridge to Kubernetes is a great tool for microservice development and debugging applications without having to locally replicate all the required microservices.

    Bridge to Kubernetes works with Visual Studio or Visual Studio Code.

    We'll walk through using it with Visual Studio Code.

    Connecting Bridge to Kubernetes to Our Cluster

    Ensure your AKS cluster is the default for kubectl

    If you've recently spun up a new AKS cluster or you have been working with a different cluster, you may need to change what cluster credentials you have configured.

    If it's a new cluster, we can use:

RESOURCE_GROUP=<YOUR RESOURCE GROUP NAME>
CLUSTER_NAME=<YOUR AKS CLUSTER NAME>
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME

    Open the command palette

    Open the command palette and find Bridge to Kubernetes: Configure. You may need to start typing the name to get it to show up.

    The command palette for Visual Studio Code is open and the first item is Bridge to Kubernetes: Configure

    Pick the service you want to debug

    Bridge to Kubernetes will redirect a service for you. Pick the service you want to redirect, in this case we'll pick web.

    Selecting the `web` service to redirect in Visual Studio Code

    Identify the port your application runs on

    Next, we'll be prompted to identify what port our application will run on locally. For this application it'll be 5001, but that's just specific to this application (and the default for ASP.NET 7, I believe).

    Setting port 5001 as the port to redirect to the `web` Kubernetes service in Visual Studio Code

    Pick a debug configuration to extend

Bridge to Kubernetes has a couple of ways to run - it can inject its setup and teardown into your existing debug configurations. We'll pick .NET Core Launch (web).

    Telling Bridge to Kubernetes to use the .NET Core Launch (web) debug configuration in Visual Studio Code

    Forward Traffic for All Requests

    The last prompt you'll get in the configuration is about how you want Bridge to Kubernetes to handle re-routing traffic. The default is that all requests into the service will get your local version.

    You can also redirect specific traffic. Bridge to Kubernetes will set up a subdomain and route specific traffic to your local service, while allowing other traffic to the deployed service.

    Allowing the launch of Endpoint Manager on Windows

    Using Bridge to Kubernetes to Debug Our Service

    Now that we've configured Bridge to Kubernetes, we see that tasks and a new launch configuration have been added.

    Added to .vscode/tasks.json:

{
    "label": "bridge-to-kubernetes.resource",
    "type": "bridge-to-kubernetes.resource",
    "resource": "web",
    "resourceType": "service",
    "ports": [
        5001
    ],
    "targetCluster": "aks1",
    "targetNamespace": "default",
    "useKubernetesServiceEnvironmentVariables": false
},
{
    "label": "bridge-to-kubernetes.compound",
    "dependsOn": [
        "bridge-to-kubernetes.resource",
        "build"
    ],
    "dependsOrder": "sequence"
}

    And added to .vscode/launch.json:

{
    "name": ".NET Core Launch (web) with Kubernetes",
    "type": "coreclr",
    "request": "launch",
    "preLaunchTask": "bridge-to-kubernetes.compound",
    "program": "${workspaceFolder}/src/Web/bin/Debug/net7.0/Web.dll",
    "args": [],
    "cwd": "${workspaceFolder}/src/Web",
    "stopAtEntry": false,
    "env": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "ASPNETCORE_URLS": "http://+:5001"
    },
    "sourceFileMap": {
        "/Views": "${workspaceFolder}/Views"
    }
}

    Launch the debug configuration

    We can start the process with the .NET Core Launch (web) with Kubernetes launch configuration in the Debug pane in Visual Studio Code.

    Launch the `.NET Core Launch (web) with Kubernetes` from the Debug pane in Visual Studio Code

    Enable the Endpoint Manager

    Part of this process includes a local service to help manage the traffic routing and your hosts file. This will require admin or sudo privileges. On Windows, you'll get a prompt like:

    Prompt to launch the endpoint manager.

    Access your Kubernetes cluster "locally"

    Bridge to Kubernetes will set up a tunnel (thanks to port forwarding) to your local workstation and create local endpoints for the other Kubernetes hosted services in your cluster, as well as pretending to be a pod in that cluster (for the application you are debugging).

    Output from Bridge To Kubernetes setup task.

    After making the connection to your Kubernetes cluster, the launch configuration will continue. In this case, we'll make a debug build of the application and attach the debugger. (This process may cause the terminal in VS Code to scroll with build output. You can find the Bridge to Kubernetes output with the local IP addresses and ports in the Output pane for Bridge to Kubernetes.)

You can set breakpoints, use your debug console, set watches, and run tests against your local version of the service.

    Exploring the Running Application Environment

    One of the cool things that Bridge to Kubernetes does for our debugging experience is bring the environment configuration that our deployed pod would inherit. When we launch our app, it'll see configuration and secrets that we'd expect our pod to be running with.

    To test this, we'll set a breakpoint in our application's start up to see what SQL Server is being used. We'll set a breakpoint at src/Infrastructure/Dependencies.cs on line 32.

    Then, we will start debugging the application with Bridge to Kubernetes. When it hits the breakpoint, we'll open the Debug pane and type configuration.GetConnectionString("CatalogConnection").

    When we run locally (not with Bridge to Kubernetes), we'd see:

    configuration.GetConnectionString("CatalogConnection")
    "Server=(localdb)\\mssqllocaldb;Integrated Security=true;Initial Catalog=Microsoft.eShopOnWeb.CatalogDb;"

But with Bridge to Kubernetes, we see something more like (yours will vary based on the generated password):

    configuration.GetConnectionString("CatalogConnection")
    "Server=db;Database=Microsoft.eShopOnWeb.CatalogDb;User Id=sa;Password=*****************;TrustServerCertificate=True;"

    Debugging our local application connected to Kubernetes.

We can see that the configured database server is based on our db service, and the password is pulled from our secret in Azure Key Vault (via AKS).

    This helps us run our local application just like it was actually in our cluster.

    Going Further

    Bridge to Kubernetes also supports more advanced scenarios and, as you need to start routing traffic around inside your cluster, may require you to modify your application to pass along a kubernetes-route-as header to help ensure that traffic for your debugging workloads is properly handled. The docs go into much greater detail about that.
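If you want to verify by hand that a routing header makes it through your services, a hedged sketch looks like the following. The prefix value and host here are placeholders for illustration only - Bridge to Kubernetes generates the real routing value for your session:

# Both the prefix and the host below are placeholders, not real values
curl -H "kubernetes-route-as: <generated-prefix>" http://<your-ingress-host>/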

    Instrumentation

    Now that we've figured out our debugging story, we'll need to ensure that we have the right context clues to find where we need to debug or to give us a better idea of how well our microservices are running.

    Logging and Tracing

Logging and tracing become even more critical in Kubernetes, where your application could be running in a number of pods across different nodes. When you have an issue, in addition to the normal application data, you'll want to know which pod and which node had the issue, what state those resources were in (were you resource constrained, or were shared resources unavailable?), and, if autoscaling is enabled, whether a scale event was triggered. There are a multitude of other concerns based on your application and the environment you maintain.

Given these informational needs, it's crucial to revisit your existing logging and instrumentation. Most frameworks and languages have extensible logging, tracing, and instrumentation libraries that you can iteratively enrich with information such as pod and node states, and that you can use to ensure requests can be traced across your microservices. This will pay you back time and time again when you have to troubleshoot issues in your environment.
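Even before richer instrumentation is wired up, kubectl can surface a lot of this pod and node context. A few examples (the deployment name web is an assumption based on this week's sample):

# Which node is each pod on, and is anything restarting?
kubectl get pods -o wide

# Recent cluster events (scheduling, scaling, probe failures, etc.)
kubectl get events --sort-by=.metadata.creationTimestamp

# Logs from every container behind the web deployment
kubectl logs deployment/web --all-containers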

    Centralized Logging

    To enhance the troubleshooting process further, it's important to implement centralized logging to consolidate logs from all your microservices into a single location. This makes it easier to search and analyze logs when you're troubleshooting an issue.

    Automated Alerting

    Additionally, implementing automated alerting, such as sending notifications when specific conditions occur in the logs, can help you detect issues before they escalate.

    End to end Visibility

    End-to-end visibility is also essential in understanding the flow of requests and responses between microservices in a distributed system. With end-to-end visibility, you can quickly identify bottlenecks and slowdowns in the system, helping you to resolve issues more efficiently.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    - + \ No newline at end of file diff --git a/cnny-2023/tags/zero-to-hero/page/12/index.html b/cnny-2023/tags/zero-to-hero/page/12/index.html index 878c61c257..e4765bdd96 100644 --- a/cnny-2023/tags/zero-to-hero/page/12/index.html +++ b/cnny-2023/tags/zero-to-hero/page/12/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "zero-to-hero"

    View All Tags

    · 7 min read
    Nitya Narasimhan

    Welcome to Week 4 of #CloudNativeNewYear!

    This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. We'll start with an exploration of Serverless Container Options - ranging from managed services to Azure Kubernetes Service (AKS) and Azure Container Apps (ACA), to options that allow more granular control!

    What We'll Cover

    • The Azure Compute Landscape
    • Serverless Compute on Azure
    • Comparing Container Options On Azure
    • Other Considerations
    • Exercise: Try this yourself!
    • Resources: For self-study!


    We started this series with an introduction to core concepts:

    • In Containers 101, we learned why containerization matters. Think portability, isolation, scalability, resource-efficiency and cost-effectiveness. But not all apps can be containerized.
    • In Kubernetes 101, we learned how orchestration works. Think systems to automate container deployment, scaling, and management. But using Kubernetes directly can be complex.
• In Exploring Cloud Native Options we asked the real questions: can we containerize - and should we? The first depends on app characteristics, the second on your requirements.

    For example:

    • Can we containerize? The answer might be no if your app has system or OS dependencies, requires access to low-level hardware, or maintains complex state across sessions.
• Should we containerize? The answer might be yes if your app is microservices-based, is stateless by default, requires portability, or is a legacy app that can benefit from container isolation.

    As with every technology adoption decision process, there are no clear yes/no answers - just tradeoffs that you need to evaluate based on your architecture and application requirements. In today's post, we try to look at this from two main perspectives:

    1. Should you go serverless? Think: managed services that let you focus on app, not infra.
    2. What Azure Compute should I use? Think: best fit for my architecture & technology choices.

    Azure Compute Landscape

Let's answer the second question first by exploring all the available compute options on Azure. The illustrated decision flow below is my favorite way to navigate the choices, with questions like:

    • Are you migrating an existing app or building a new one?
• Can your app be containerized?
    • Does it use a specific technology (Spring Boot, Red Hat Openshift)?
    • Do you need access to the Kubernetes API?
    • What characterizes the workload? (event-driven, web app, microservices etc.)

Read the docs to understand how your choices can be influenced by the hosting model (IaaS, PaaS, FaaS), supported features (Networking, DevOps, Scalability, Availability, Security), architectural styles (Microservices, Event-driven, High-Performance Compute, Task Automation, Web-Queue Worker) etc.

    Compute Choices

Now that we know all the available compute options, let's address the first question: why go serverless, and what are my serverless compute options on Azure?

    Azure Serverless Compute

Serverless gets defined many ways, but from a compute perspective we can focus on a few characteristics that are key to this decision:

    • managed services - focus on application, let cloud provider handle infrastructure.
    • pay for what you use - get cost-effective resource utilization, flexible pricing options.
    • autoscaling on demand - take advantage of built-in features like KEDA-compliant triggers.
    • use preferred languages - write code in Java, JS, C#, Python etc. (specifics based on service)
    • cloud-native architectures - can support event-driven solutions, APIs, Microservices, DevOps!

So what are some of the key options for Serverless Compute on Azure? The article dives into Azure's fully-managed, end-to-end serverless solutions, with comprehensive support for DevOps, DevTools, AI/ML, Database, Storage, Monitoring and Analytics integrations. But we'll just focus on the four categories of applications when we look at Compute!

    1. Serverless Containerized Microservices using Azure Container Apps. Code in your preferred language, exploit full Dapr support, scale easily with any KEDA-compliant trigger.
    2. Serverless Application Environments using Azure App Service. Suitable for hosting monolithic apps (vs. microservices) in a managed service, with built-in support for on-demand scaling.
    3. Serverless Kubernetes using Azure Kubernetes Service (AKS). Spin up pods inside container instances and deploy Kubernetes-based applications with built-in KEDA-compliant autoscaling.
    4. Serverless Functions using Azure Functions. Execute "code at the granularity of functions" in your preferred language, and scale on demand with event-driven compute.

We'll talk about these, and other compute comparisons, at the end of the article. But let's start with the core option you might choose if you want a managed serverless compute solution with built-in features for delivering containerized microservices at scale. Hello, Azure Container Apps!

    Azure Container Apps

    Azure Container Apps (ACA) became generally available in May 2022 - providing customers with the ability to run microservices and containerized applications on a serverless, consumption-based platform. The figure below showcases the different types of applications that can be built with ACA. Note that it comes with built-in KEDA-compliant autoscaling triggers, and other auto-scale criteria that may be better-suited to the type of application you are building.
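To give a sense of how little ceremony that involves, here's a hedged sketch of deploying a public sample image with the Azure CLI. The resource names are placeholders, the image is the ACA hello-world quickstart image, and you may need to add the containerapp CLI extension first:

# The containerapp commands ship as an Azure CLI extension
az extension add --name containerapp --upgrade

# Create an environment, then a container app with external ingress
az containerapp env create \
    --name my-aca-env \
    --resource-group my-rg \
    --location eastus

az containerapp create \
    --name hello-aca \
    --resource-group my-rg \
    --environment my-aca-env \
    --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
    --target-port 80 \
    --ingress external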

    About ACA

    So far in the series, we've put the spotlight on Azure Kubernetes Service (AKS) - so you're probably asking yourself: How does ACA compare to AKS?. We're glad you asked. Check out our Go Cloud-native with Azure Container Apps post from the #ServerlessSeptember series last year for a deeper-dive, or review the figure below for the main comparison points.

    The key takeaway is this. Azure Container Apps (ACA) also runs on Kubernetes but abstracts away its complexity in a managed service offering that lets you get productive quickly without requiring detailed knowledge of Kubernetes workings or APIs. However, if you want full access and control over the Kubernetes API then go with Azure Kubernetes Service (AKS) instead.

    Comparison

    Other Container Options

    Azure Container Apps is the preferred Platform As a Service (PaaS) option for a fully-managed serverless solution on Azure that is purpose-built for cloud-native microservices-based application workloads. But - there are other options that may be suitable for your specific needs, from a requirements and tradeoffs perspective. Let's review them quickly:

    1. Azure Functions is the serverless Functions-as-a-Service (FaaS) option, as opposed to ACA which supports a PaaS approach. It's optimized for running event-driven applications built at the granularity of ephemeral functions that can be deployed as code or containers.
    2. Azure App Service provides fully managed hosting for web applications that may be deployed using code or containers. It can be integrated with other services including Azure Container Apps and Azure Functions. It's optimized for deploying traditional web apps.
    3. Azure Kubernetes Service provides a fully managed Kubernetes option capable of running any Kubernetes workload, with direct access to the Kubernetes API.
    4. Azure Container Instances provides a single pod of Hyper-V isolated containers on demand, making them more of a low-level "building block" option compared to ACA.

    Based on the technology choices you made for application development you may also have more specialized options you want to consider. For instance:

1. Azure Spring Apps is ideal if you're running Spring Boot or Spring Cloud workloads on Azure.
    2. Azure Red Hat OpenShift is ideal for integrated Kubernetes-powered OpenShift on Azure.
    3. Azure Confidential Computing is ideal if you have data/code integrity and confidentiality needs.
    4. Kubernetes At The Edge is ideal for bare-metal options that extend compute to edge devices.

    This is just the tip of the iceberg in your decision-making journey - but hopefully, it gave you a good sense of the options and criteria that influences your final choices. Let's wrap this up with a look at self-study resources for skilling up further.

    Exercise

    Want to get hands on learning related to these technologies?

    TAKE THE CLOUD SKILLS CHALLENGE

    Register today and level up your skills by completing free learning modules, while competing with your peers for a place on the leaderboards!

    Resources

    - + \ No newline at end of file diff --git a/cnny-2023/tags/zero-to-hero/page/13/index.html b/cnny-2023/tags/zero-to-hero/page/13/index.html index c6bae57e7f..2439af392e 100644 --- a/cnny-2023/tags/zero-to-hero/page/13/index.html +++ b/cnny-2023/tags/zero-to-hero/page/13/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "zero-to-hero"

    View All Tags

    · 3 min read
    Cory Skimming

    It's the final week of #CloudNativeNewYear! This week we'll go further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner. In today's post, we will introduce you to the basics of the open-source project Draft and how it can be used to easily create and deploy applications to Kubernetes.

    It's not too late to sign up for and complete the Cloud Skills Challenge!

    What We'll Cover

    • What is Draft?
    • Draft basics
    • Demo: Developing to AKS with Draft
    • Resources


    What is Draft?

    Draft is an open-source tool that can be used to streamline the development and deployment of applications on Kubernetes clusters. It provides a simple and easy-to-use workflow for creating and deploying applications, making it easier for developers to focus on writing code and building features, rather than worrying about the underlying infrastructure. This is great for users who are just getting started with Kubernetes, or those who are just looking to simplify their experience.

    New to Kubernetes?

    Draft basics

    Draft streamlines Kubernetes development by taking a non-containerized application and generating the Dockerfiles, K8s manifests, Helm charts, and other artifacts associated with a containerized application. Draft can also create a GitHub Action workflow file to quickly build and deploy your application onto any Kubernetes cluster.

1. 'draft create': Create a new Draft project by simply running the 'draft create' command - this command will walk you through a series of questions about your application (such as the application language) and create a Dockerfile, Helm chart, and Kubernetes manifests.
2. 'draft generate-workflow': Automatically build out a GitHub Action workflow using the 'draft generate-workflow' command.
3. 'draft setup-gh': If you are using Azure, use this command to automate the GitHub OIDC setup process to ensure that you will be able to deploy your application using your GitHub Action.

    At this point, you will have all the files needed to deploy your app onto a Kubernetes cluster (we told you it was easy!).
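Put together, a sketch of that flow from a terminal (run from the root of your application repository) might look like:

# Scaffold Dockerfile, Helm chart, and Kubernetes manifests (interactive prompts)
draft create

# Generate a GitHub Actions workflow for build and deploy
draft generate-workflow

# (Azure/AKS only) configure GitHub OIDC federation for that workflow
draft setup-gh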

    You can also use the 'draft info' command if you are looking for information on supported languages and deployment types. Let's see it in action, shall we?


    Developing to AKS with Draft

In the Microsoft Reactor session below, we'll briefly introduce Kubernetes and Azure Kubernetes Service (AKS), then demo how to enable your applications for Kubernetes using the open-source tool Draft. We'll show how Draft can help you create the boilerplate code to containerize your applications and add routing and scaling behaviours.

Conclusion

    Overall, Draft simplifies the process of building, deploying, and managing applications on Kubernetes, and can make the overall journey from code to Kubernetes significantly easier.


    Resources


    - + \ No newline at end of file diff --git a/cnny-2023/tags/zero-to-hero/page/14/index.html b/cnny-2023/tags/zero-to-hero/page/14/index.html index 8dc016b9a6..c51e1d785e 100644 --- a/cnny-2023/tags/zero-to-hero/page/14/index.html +++ b/cnny-2023/tags/zero-to-hero/page/14/index.html @@ -14,14 +14,14 @@ - +

    16 posts tagged with "zero-to-hero"

    View All Tags

    · 7 min read
    Vinicius Apolinario

    Welcome to Day 3 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about using Draft to accelerate your Kubernetes adoption. Today we'll explore the topic of Windows containers.

    What We'll Cover

    • Introduction
    • Windows containers overview
    • Windows base container images
    • Isolation
    • Exercise: Try this yourself!
    • Resources: For self-study!

    Introduction

Windows containers launched with Windows Server 2016 and have evolved since. In the latest release, Windows Server 2022, Windows containers have reached a great level of maturity and allow customers to run production-grade workloads.

While suitable for new development, Windows containers also give developers and operations teams a different approach than Linux containers. They allow existing Windows applications to be containerized with little or no code changes, and they let professionals who are more comfortable with the Windows platform and OS leverage their skill set while taking advantage of the containers platform.

    Windows container overview

In essence, Windows containers are very similar to Linux ones. Since Windows containers use the same Docker foundation, you can expect the same architecture to apply - with some Windows OS specifics. For example, when running a Windows container via Docker, you use the same commands, such as docker run. To pull a container image, you can use docker pull, just like on Linux. However, to run a Windows container, you also need a Windows container host. This requirement exists because, as you might remember, a container shares the OS kernel with its container host.

On Kubernetes, Windows containers have been supported since Windows Server 2019. Just like with Docker, you can manage Windows containers like any other resource in the Kubernetes ecosystem. A Windows node can be part of a Kubernetes cluster, allowing you to run Windows container-based applications on services like Azure Kubernetes Service. To deploy a Windows application to a Windows pod in Kubernetes, you author a YAML specification much like you would for Linux. The main difference is that you point to an image that runs on Windows, and you specify a node selector to indicate that the pod needs to run on a Windows node.

    Windows base container images

With Windows containers, you will always use a base container image provided by Microsoft. This base container image contains the OS binaries for the container to run. The image can be as large as 3GB+ or as small as ~300MB. The difference in size is a consequence of the APIs and components available in each Windows base container image. There are primarily three images: Nano Server, Server Core, and Server.

Nano Server is the smallest image, at around 300MB. It's the base container image for new development and cloud-native scenarios. Applications need to target Nano Server as the Windows OS, so not all frameworks will work. For example, .NET works on Nano Server, but .NET Framework doesn't. Other third-party frameworks also work on Nano Server, such as Apache, Node.js, Python, Tomcat, the Java runtime, JBoss, and Redis, among others.

Server Core is a much larger base container image, at around 1.25GB. Its larger size is compensated by its application compatibility. Simply put, any application that meets the requirements to run in a Windows container can be containerized with this image.

The Server image builds on Server Core. It comes in at around 3.1GB and has even greater application compatibility than the Server Core image. In addition to the traditional Windows APIs and components, this image allows for scenarios such as machine learning via DirectX with GPU access.

The best image for your scenario depends on the requirements your application has on the Windows OS inside a container. However, some scenarios are not supported at all on Windows containers - such as GUI or RDP-dependent applications and some Windows Server infrastructure roles (such as Active Directory), among others.
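If you want to see the size difference for yourself, you can pull the three base images and compare them. The ltsc2022 tags below assume you're on a Windows Server 2022 (or matching) container host:

docker pull mcr.microsoft.com/windows/nanoserver:ltsc2022
docker pull mcr.microsoft.com/windows/servercore:ltsc2022
docker pull mcr.microsoft.com/windows/server:ltsc2022

# Compare the on-disk sizes of the three images
docker images --filter=reference="mcr.microsoft.com/windows/*"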

    Isolation

    When running containers, the kernel of the container host is shared with the containers running on it. While extremely convenient, this poses a potential risk for multi-tenant scenarios. If one container is compromised and has access to the host, it could potentially compromise other containers in the same system.

For enterprise customers running on-premises (or even in the cloud), this can be mitigated by using a VM as a container host and treating the VM itself as a security boundary. However, if multiple workloads from different tenants need to share the same host, Windows containers offer another option: Hyper-V isolation. While the name Hyper-V is associated with VMs, its virtualization capabilities extend to other services, including containers. Hyper-V isolated containers run in a purpose-built, extremely small, highly performant VM. However, you manage a container running with Hyper-V isolation the same way you do a process-isolated one. In fact, the only notable difference is that you need to append the --isolation=hyperv flag to the docker run command.

    Exercise

    Here are a few examples of how to use Windows containers:

    Run Windows containers via Docker on your machine

    To pull a Windows base container image:

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022

    To run a basic IIS container:

# This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    Run the same IIS container with Hyper-V isolation

# This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 --isolation=hyperv mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    To run a Windows container interactively:

    docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 powershell

    Run Windows containers on Kubernetes

To prepare an AKS cluster for Windows containers (note: replace the values in the example below with the ones from your environment):

    echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --generate-ssh-keys \
    --windows-admin-username $WINDOWS_USERNAME \
    --vm-set-type VirtualMachineScaleSets \
    --network-plugin azure

    To add a Windows node pool for Windows containers:

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwin \
    --node-count 1

    Deploy a sample ASP.Net application to the AKS cluster above using the YAML file below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "kubernetes.io/os": windows
      containers:
      - name: sample
        image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
        resources:
          limits:
            cpu: 1
            memory: 800M
        ports:
          - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: sample

    Save the file above and run the command below on your Kubernetes cluster:

kubectl apply -f <filename>

    Once deployed, you can access the application by getting the IP address of your service:

    kubectl get service

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/tags/zero-to-hero/page/15/index.html b/cnny-2023/tags/zero-to-hero/page/15/index.html index d1f3416a63..8129274d5b 100644 --- a/cnny-2023/tags/zero-to-hero/page/15/index.html +++ b/cnny-2023/tags/zero-to-hero/page/15/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "zero-to-hero"

    View All Tags

    · 4 min read
    Jorge Arteiro

    Welcome to Day 4 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about Windows Containers. Today we'll explore addons and extensions available to Azure Kubernetes Services (AKS).

    What We'll Cover

    • Introduction
    • Add-ons
    • Extensions
    • Add-ons vs Extensions
    • Resources

    Introduction

    Azure Kubernetes Service (AKS) is a fully managed container orchestration service that makes it easy to deploy and manage containerized applications on Azure. AKS offers a number of features and capabilities, including the ability to extend its supported functionality through the use of add-ons and extensions.

    There are also integrations available from open-source projects and third parties, but they are not covered by the AKS support policy.

    Add-ons

Add-ons provide a supported way to extend AKS. Installation, configuration, and lifecycle are managed by AKS, following pre-determined update rules.

As an example, let's enable Container Insights with the monitoring add-on on an existing AKS cluster, using the az aks enable-addons --addons CLI command:

az aks enable-addons \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --addons monitoring

or you can use az aks create --enable-addons when creating new clusters:

az aks create \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --enable-addons monitoring

The currently available add-ons are:

    1. http_application_routing - Configure ingress with automatic public DNS name creation. Only recommended for development.
    2. monitoring - Container Insights monitoring.
    3. virtual-node - CNCF virtual nodes open source project.
    4. azure-policy - Azure Policy for AKS.
    5. ingress-appgw - Application Gateway Ingress Controller (AGIC).
    6. open-service-mesh - CNCF Open Service Mesh project.
    7. azure-keyvault-secrets-provider - Azure Key Vault Secrets Provider for Secret Store CSI Driver.
    8. web_application_routing - Managed NGINX ingress Controller.
    9. keda - CNCF Event-driven autoscaling project.

    For more details, get the updated list of AKS Add-ons here
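To check which add-ons are already enabled on a cluster, you can inspect its add-on profiles. The cluster and resource group names below match the earlier examples; newer CLI versions also offer a dedicated az aks addon list command:

# Show the add-on profiles configured on the cluster
az aks show \
    --name MyManagedCluster \
    --resource-group MyResourceGroup \
    --query addonProfiles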

    Extensions

Cluster extensions use Helm charts and integrate with Azure Resource Manager (ARM) to provide installation and lifecycle management of capabilities on top of AKS.

Extensions can be auto-upgraded across minor versions, but this requires extra management and configuration. Using the scope parameter, an extension can be installed for the whole cluster or per namespace.

AKS extensions require an Azure CLI extension to be installed. To add or update this CLI extension, use the following commands:

    az extension add --name k8s-extension

and to update an existing extension:

    az extension update --name k8s-extension

    There are only 3 available extensions:

    1. Dapr - CNCF Dapr project.
    2. Azure ML - Integrate Azure Machine Learning with AKS to train, inference and manage ML models.
    3. Flux (GitOps) - CNCF Flux project integrated with AKS to enable cluster configuration and application deployment using GitOps.

    As an example, you can install Azure ML using the following command:

az k8s-extension create \
    --name aml-compute --extension-type Microsoft.AzureML.Kubernetes \
    --scope cluster --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters \
    --configuration-settings enableInference=True allowInsecureConnections=True

    For more details, get the updated list of AKS Extensions here
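And to see which extensions are installed on a given cluster, you can list them with the same CLI extension (the cluster and resource group placeholders match the example above):

# List the cluster extensions installed on an AKS cluster
az k8s-extension list \
    --cluster-name <clusterName> \
    --resource-group <resourceGroupName> \
    --cluster-type managedClusters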

    Add-ons vs Extensions

AKS add-ons bring the advantage of being fully managed by AKS itself, while AKS extensions are more flexible and configurable but require an extra level of management.

Add-ons are part of the AKS resource provider in the Azure API, while AKS extensions are handled by a separate resource provider in the Azure API.

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/cnny-2023/tags/zero-to-hero/page/16/index.html b/cnny-2023/tags/zero-to-hero/page/16/index.html index 30760c6680..d69e0353e4 100644 --- a/cnny-2023/tags/zero-to-hero/page/16/index.html +++ b/cnny-2023/tags/zero-to-hero/page/16/index.html @@ -14,13 +14,13 @@ - +

    16 posts tagged with "zero-to-hero"

    View All Tags

    · 6 min read
    Cory Skimming
    Steven Murawski
    Paul Yu
    Josh Duffney
    Nitya Narasimhan
    Vinicius Apolinario
    Jorge Arteiro
    Devanshi Joshi

    And that's a wrap on the inaugural #CloudNativeNewYear! Thank you for joining us to kick off the new year with this learning journey into cloud-native! In this final post of the 2023 celebration of all things cloud-native, we'll do two things:

    • Look Back - with a quick retrospective of what was covered.
    • Look Ahead - with resources and suggestions for how you can continue your skilling journey!

    We appreciate your time and attention and we hope you found this curated learning valuable. Feedback and suggestions are always welcome. From our entire team, we wish you good luck with the learning journey - now go build some apps and share your knowledge! 🎉


    What We'll Cover

    • Cloud-native fundamentals
    • Kubernetes fundamentals
    • Bringing your applications to Kubernetes
    • Go further with cloud-native
    • Resources to keep the celebration going!

    Week 1: Cloud-native Fundamentals

    In Week 1, we took a tour through the fundamentals of cloud-native technologies, including a walkthrough of the core concepts of containers, microservices, and Kubernetes.

    • Jan 23 - Cloud-native Fundamentals: The answers to life and all the universe - what is cloud-native? What makes an application cloud-native? What are the benefits? (yes, we all know it's 42, but hey, gotta start somewhere!)
    • Jan 24 - Containers 101: Containers are an essential component of cloud-native development. In this intro post, we cover how containers work and why they have become so popular.
    • Jan 25 - Kubernetes 101: Kuber-what-now? Learn the basics of Kubernetes and how it enables us to deploy and manage our applications effectively and consistently.
    A QUICKSTART GUIDE TO KUBERNETES CONCEPTS

    Missed it Live? Tune in to A Quickstart Guide to Kubernetes Concepts on demand, now!

    • Jan 26 - Microservices 101: What is a microservices architecture and how can we go about designing one?
    • Jan 27 - Exploring your Cloud Native Options: Cloud-native, while catchy, can be a very broad term. What technologies should you use? Learn some basic guidelines for when it is optimal to use different technologies for your project.

    Week 2: Kubernetes Fundamentals

    In Week 2, we took a deeper dive into the Fundamentals of Kubernetes. The posts and live demo from this week took us through how to build a simple application on Kubernetes, covering everything from deployment to networking and scaling. Note: for our samples and demo we have used Azure Kubernetes Service, but the principles apply to any Kubernetes!

    • Jan 30 - Pods and Deployments: how to use pods and deployments in Kubernetes.
• Jan 31 - Services and Ingress: how to use services and ingress, with a walk through the steps of making our containers accessible internally and externally!
• Feb 1 - ConfigMaps and Secrets: how to pass configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.
    • Feb 2 - Volumes, Mounts, and Claims: how to use persistent storage on Kubernetes (and ensure your data can survive container restarts!).
    • Feb 3 - Scaling Pods and Nodes: how to scale pods and nodes in our Kubernetes cluster.
    ASK THE EXPERTS: AZURE KUBERNETES SERVICE

    Missed it Live? Tune in to Ask the Expert with Azure Kubernetes Service on demand, now!


    Week 3: Bringing your applications to Kubernetes

    So, you have learned how to build an application on Kubernetes. What about your existing applications? In Week 3, we explored how to take an existing application and set it up to run in Kubernetes:

    • Feb 6 - CI/CD: learn how to get an existing application running in Kubernetes with a full pipeline in GitHub Actions.
• Feb 7 - Adapting Storage, Secrets, and Configuration: how to evaluate our sample application's configuration, storage, and networking requirements and implement them using Kubernetes.
    • Feb 8 - Opening your Application with Ingress: how to expose the eShopOnWeb app so that customers can reach it over the internet using a custom domain name and TLS.
    • Feb 9 - Debugging and Instrumentation: how to debug and instrument your application now that it is on Kubernetes.
• Feb 10 - CI/CD Secure Supply Chain: now that we have set up our application on Kubernetes, let's talk about container image signing and how to set up a secure supply chain.

    Week 4: Go Further with Cloud-Native

    This week we have gone further with Cloud-native by exploring advanced topics and best practices for the Cloud-native practitioner.

    And today, February 17th, with this one post to rule (er, collect) them all!


    Keep the Learning Going!

    Learning is great, so why stop here? We have a host of great resources and samples for you to continue your cloud-native journey with Azure below:



    · 5 min read
    Cory Skimming

    Welcome to Week 1 of #CloudNativeNewYear!

    Cloud-native New Year

    You will often hear the term "cloud-native" when discussing modern application development, but even a quick online search will return a huge number of articles, tweets, and web pages with a variety of definitions. So, what does cloud-native actually mean? Also, what makes an application a cloud-native application versus a "regular" application?

    Today, we will address these questions and more as we kickstart our learning journey (and our new year!) with an introductory dive into the wonderful world of cloud-native.


    What We'll Cover

    • What is cloud-native?
    • What is a cloud-native application?
    • The benefits of cloud-native
    • The five pillars of cloud-native
    • Exercise: Take the Cloud Skills Challenge!

    1. What is cloud-native?

The term "cloud-native" can seem pretty self-evident (yes, hello, native to the cloud?), and in a way, it is. While there are lots of definitions of cloud-native floating around, at its core, cloud-native simply refers to a modern approach to building software that takes advantage of cloud services and environments. This includes using cloud-native technologies, such as containers, microservices, and serverless, and following best practices for deploying, scaling, and managing applications in a cloud environment.

    Official definition from the Cloud Native Computing Foundation:

    Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

    These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil. Source


    2. So, what exactly is a cloud-native application?

    Cloud-native applications are specifically designed to take advantage of the scalability, resiliency, and distributed nature of modern cloud infrastructure. But how does this differ from a "traditional" application?

Traditional applications have generally been built, tested, and deployed as a single, monolithic unit. The monolithic nature of this type of architecture creates close dependencies between components. This complexity and interweaving only increases as an application grows, which can make it difficult to evolve (not to mention troubleshoot) and challenging to operate over time.

In contrast, cloud-native architectures decompose application components into loosely coupled services, rather than building and deploying them as one block of code. This decomposition into multiple self-contained services enables teams to manage complexity and improve the speed, agility, and scale of software delivery. Working with many small parts lets teams make targeted updates, deliver new features, and fix issues without causing broader service disruption.


    3. The benefits of cloud-native

    Cloud-native architectures can bring many benefits to an organization, including:

    1. Scalability: easily scale up or down based on demand, allowing organizations to adjust their resource usage and costs as needed.
    2. Flexibility: deploy and run on any cloud platform, and easily move between clouds and on-premises environments.
    3. High-availability: techniques such as redundancy, self-healing, and automatic failover help ensure that cloud-native applications are designed to be highly-available and fault tolerant.
    4. Reduced costs: take advantage of the pay-as-you-go model of cloud computing, reducing the need for expensive infrastructure investments.
    5. Improved security: tap in to cloud security features, such as encryption and identity management, to improve the security of the application.
    6. Increased agility: easily add new features or services to your applications to meet changing business needs and market demand.

    4. The pillars of cloud-native

    There are five areas that are generally cited as the core building blocks of cloud-native architecture:

    1. Microservices: Breaking down monolithic applications into smaller, independent, and loosely-coupled services that can be developed, deployed, and scaled independently.
    2. Containers: Packaging software in lightweight, portable, and self-sufficient containers that can run consistently across different environments.
    3. Automation: Using automation tools and DevOps processes to manage and operate the cloud-native infrastructure and applications, including deployment, scaling, monitoring, and self-healing.
    4. Service discovery: Using service discovery mechanisms, such as APIs & service meshes, to enable services to discover and communicate with each other.
    5. Observability: Collecting and analyzing data from the infrastructure and applications to understand and optimize the performance, behavior, and health of the system.

    These can (and should!) be used in combination to deliver cloud-native solutions that are highly scalable, flexible, and available.

    WHAT'S NEXT

    Stay tuned, as we will be diving deeper into these topics in the coming weeks:

    • Jan 24: Containers 101
    • Jan 25: Adopting Microservices with Kubernetes
    • Jan 26: Kubernetes 101
    • Jan 27: Exploring your Cloud-native Options

    Resources


    Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!



    · 4 min read
    Steven Murawski
    Paul Yu
    Josh Duffney

    Welcome to Day 2 of Week 1 of #CloudNativeNewYear!

    Today, we'll focus on building an understanding of containers.

    What We'll Cover

    • Introduction
    • How do Containers Work?
    • Why are Containers Becoming so Popular?
    • Conclusion
    • Resources
    • Learning Path

    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    In the beginning, we deployed our applications onto physical servers. We only had a certain number of those servers, so often they hosted multiple applications. This led to some problems when those applications shared dependencies. Upgrading one application could break another application on the same server.

    Enter virtualization. Virtualization allowed us to run our applications in an isolated operating system instance. This removed much of the risk of updating shared dependencies. However, it increased our overhead since we had to run a full operating system for each application environment.

    To address the challenges created by virtualization, containerization was created to improve isolation without duplicating kernel level resources. Containers provide efficient and consistent deployment and runtime experiences for our applications and have become very popular as a way of packaging and distributing applications.

    How do Containers Work?

    Containers build on two capabilities in the Linux operating system, namespaces and cgroups. These constructs allow the operating system to provide isolation to a process or group of processes, keeping their access to filesystem resources separate and providing controls on resource utilization. This, combined with tooling to help package, deploy, and run container images has led to their popularity in today’s operating environment. This provides us our isolation without the overhead of additional operating system resources.

When a container host is deployed on an operating system, it schedules access to the operating system's components. It does this by providing a logically isolated group, called a namespace, that contains the processes for a given application. The container host then manages/schedules access from the namespace to the host OS and uses cgroups to allocate compute resources. Together, with the help of cgroups and namespaces, the container host can schedule multiple applications to access host OS resources.

    Overall, this gives the illusion of virtualizing the host OS, where each application gets its own OS. In actuality, all the applications are running on the same operating system and sharing the same kernel as the container host.

    Containers are popular in the software development industry because they provide several benefits over traditional virtualization methods. Some of these benefits include:

    • Portability: Containers make it easy to move an application from one environment to another without having to worry about compatibility issues or missing dependencies.
• Isolation: Containers provide a level of isolation between the application and the host system, which means that the application running in the container cannot directly access the host system's resources.
    • Scalability: Containers make it easy to scale an application up or down as needed, which is useful for applications that experience a lot of traffic or need to handle a lot of data.
    • Resource Efficiency: Containers are more resource-efficient than traditional virtualization methods because they don't require a full operating system to be running on each virtual machine.
    • Cost-Effective: Containers are more cost-effective than traditional virtualization methods because they don't require expensive hardware or licensing fees.

    Conclusion

    Containers are a powerful technology that allows developers to package and deploy applications in a portable and isolated environment. This technology is becoming increasingly popular in the world of software development and is being used by many companies and organizations to improve their application deployment and management processes. With the benefits of portability, isolation, scalability, resource efficiency, and cost-effectiveness, containers are definitely worth considering for your next application development project.

    Containerizing applications is a key step in modernizing them, and there are many other patterns that can be adopted to achieve cloud-native architectures, including using serverless platforms, Kubernetes, and implementing DevOps practices.

    Resources

    Learning Path


    · 3 min read
    Steven Murawski

    Welcome to Day 3 of Week 1 of #CloudNativeNewYear!

    This week we'll focus on what Kubernetes is.

    What We'll Cover

    • Introduction
    • What is Kubernetes? (Video)
    • How does Kubernetes Work? (Video)
    • Conclusion


    REGISTER & LEARN: KUBERNETES 101

    Interested in a dive into Kubernetes and a chance to talk to experts?

    🎙: Join us Jan 26 @1pm PST by registering here

    Here's what you will learn:

    • Key concepts and core principles of Kubernetes.
    • How to deploy, scale and manage containerized workloads.
    • Live Demo of the concepts explained
    • How to get started with Azure Kubernetes Service for free.

    Start your free Azure Kubernetes Trial Today!!: aka.ms/TryAKS

    Introduction

    Kubernetes is an open source container orchestration engine that can help with automated deployment, scaling, and management of our applications.

    Kubernetes takes physical (or virtual) resources and provides a consistent API over them, bringing a consistency to the management and runtime experience for our applications. Kubernetes provides us with a number of capabilities such as:

    • Container scheduling
    • Service discovery and load balancing
    • Storage orchestration
    • Automated rollouts and rollbacks
    • Automatic bin packing
    • Self-healing
    • Secret and configuration management

    We'll learn more about most of these topics as we progress through Cloud Native New Year.
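To make that "consistent API" idea concrete, every Kubernetes resource - a Pod, a Deployment, a Service - is declared with the same basic shape: an apiVersion, a kind, metadata, and a spec describing the desired state. Here's a minimal sketch (the names and image are purely illustrative, not part of this series' samples):

apiVersion: apps/v1
kind: Deployment              # the type of resource we want Kubernetes to manage
metadata:
  name: hello-web             # illustrative name only
spec:
  replicas: 2                 # desired state: keep two copies running
  selector:
    matchLabels:
      app: hello-web
  template:                   # the Pod definition Kubernetes will create and heal
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: nginx:1.25     # any container image works here
        ports:
        - containerPort: 80

Kubernetes continuously compares this declared state to what is actually running and reconciles any difference, which is what powers capabilities like self-healing and automated rollouts in the list above.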

    What is Kubernetes?

    Let's hear from Brendan Burns, one of the founders of Kubernetes as to what Kubernetes actually is.

    How does Kubernetes Work?

    And Brendan shares a bit more with us about how Kubernetes works.

    Conclusion

    Kubernetes allows us to deploy and manage our applications effectively and consistently.

    By providing a consistent API across many of the concerns our applications have, like load balancing, networking, storage, and compute, Kubernetes improves both our ability to build and ship new software.

Applications get standard interfaces for the resources they depend on. Deployments, metrics, and logs are provided in a standardized fashion, allowing more efficient operations across our application environments.

And since Kubernetes is an open source platform, it can be found in just about every type of operating environment - cloud, virtual machines, physical hardware, shared data centers, even small devices like Raspberry Pis!

    Want to learn more? Join us for a webinar on Kubernetes Concepts (or catch the playback) on Thursday, January 26th at 1 PM PST and watch for the rest of this series right here!


    · 6 min read
    Josh Duffney

    Welcome to Day 4 of Week 1 of #CloudNativeNewYear!

This week we're focusing on the fundamentals of cloud-native. Today we'll dig into microservices architecture: what it is, how you design one, and the challenges it introduces.

    What We'll Cover

    • What is Microservice Architecture?
    • How do you design a Microservice?
    • What challenges do Microservices introduce?
    • Conclusion
    • Resources


    Microservices are a modern way of designing and building software that increases deployment velocity by decomposing an application into small autonomous services that can be deployed independently.

    By deploying loosely coupled microservices your applications can be developed, deployed, and scaled independently. Because each service is independent, it can be updated or replaced without having to worry about the impact on the rest of the application. This means that if a bug is found in one service, it can be fixed without having to redeploy the entire application. All of which gives an organization the ability to deliver value to their customers faster.

    In this article, we will explore the basics of microservices architecture, its benefits and challenges, and how it can help improve the development, deployment, and maintenance of software applications.

    What is Microservice Architecture?

    Before explaining what Microservice architecture is, it’s important to understand what problems microservices aim to address.

Traditional software development is centered around building monolithic applications. Monolithic applications are built as a single, large codebase, which means the code is tightly coupled, causing the monolithic app to suffer from the following:

    Too much Complexity: Monolithic applications can become complex and difficult to understand and maintain as they grow. This can make it hard to identify and fix bugs and add new features.

    Difficult to Scale: Monolithic applications can be difficult to scale as they often have a single point of failure, which can cause the whole application to crash if a service fails.

    Slow Deployment: Deploying a monolithic application can be risky and time-consuming, as a small change in one part of the codebase can affect the entire application.

    Microservice architecture (often called microservices) is an architecture style that addresses the challenges created by Monolithic applications. Microservices architecture is a way of designing and building software applications as a collection of small, independent services that communicate with each other through APIs. This allows for faster development and deployment cycles, as well as easier scaling and maintenance than is possible with a monolithic application.

    How do you design a Microservice?

    Building applications with Microservices architecture requires a different approach. Microservices architecture focuses on business capabilities rather than technical layers, such as data access or messaging. Doing so requires that you shift your focus away from the technical stack and model your applications based upon the various domains that exist within the business.

    Domain-driven design (DDD) is a way to design software by focusing on the business needs. You can use Domain-driven design as a framework that guides the development of well-designed microservices by building services that encapsulate knowledge in each domain and abstract that knowledge from clients.

In Domain-driven design you start by modeling the business domain and creating a domain model. A domain model is an abstract model of the business that distills and organizes a domain of knowledge and provides a common language for developers and domain experts. It's this resulting domain model that microservices are best suited to be built around, because it helps establish a well-defined boundary between external systems and other internal applications.

    In short, before you begin designing microservices, start by mapping the functions of the business and their connections to create a domain model for the microservice(s) to be built around.

    What challenges do Microservices introduce?

    Microservices solve a lot of problems and have several advantages, but the grass isn’t always greener on the other side.

    One of the key challenges of microservices is managing communication between services. Because services are independent, they need to communicate with each other through APIs. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear API design, with well-defined inputs and outputs for each service. It is also important to have a system for managing and monitoring communication between services, to ensure that everything is running smoothly.

    Another challenge of microservices is managing the deployment and scaling of services. Because each service is independent, it needs to be deployed and scaled separately from the rest of the application. This can be complex and difficult to manage, especially as the number of services grows. To address this challenge, it is important to have a clear and consistent deployment process, with well-defined steps for deploying and scaling each service. Furthermore, it is advisable to host them on a system with self-healing capabilities to reduce operational burden.

    It is also important to have a system for monitoring and managing the deployment and scaling of services, to ensure optimal performance.

Each of these challenges has created fertile ground for the tooling and processes that exist in the cloud-native ecosystem. Kubernetes, CI/CD, and other DevOps practices are part of the package of adopting a microservices architecture.

    Conclusion

    In summary, microservices architecture focuses on software applications as a collection of small, independent services that communicate with each other over well-defined APIs.

    The main advantages of microservices include:

    • increased flexibility and scalability per microservice,
    • efficient resource utilization (with help from a container orchestrator like Kubernetes),
    • and faster development cycles.

    Continue following along with this series to see how you can use Kubernetes to help adopt microservices patterns in your own environments!

    Resources


    · 6 min read
    Cory Skimming

    We are excited to be wrapping up our first week of #CloudNativeNewYear! This week, we have tried to set the stage by covering the fundamentals of cloud-native practices and technologies, including primers on containerization, microservices, and Kubernetes.

Don't forget to sign up for the Cloud Skills Challenge!

    Today, we will do a brief recap of some of these technologies and provide some basic guidelines for when it is optimal to use each.


    What We'll Cover

    • To Containerize or not to Containerize?
    • The power of Kubernetes
    • Where does Serverless fit?
    • Resources
    • What's coming next!


    Just joining us now? Check out these other Week 1 posts:

    To Containerize or not to Containerize?

    As mentioned in our Containers 101 post earlier this week, containers can provide several benefits over traditional virtualization methods, which has made them popular within the software development community. Containers provide a consistent and predictable runtime environment, which can help reduce the risk of compatibility issues and simplify the deployment process. Additionally, containers can improve resource efficiency by allowing multiple applications to run on the same host while isolating their dependencies.

    Some types of apps that are a particularly good fit for containerization include:

    1. Microservices: Containers are particularly well-suited for microservices-based applications, as they can be used to isolate and deploy individual components of the system. This allows for more flexibility and scalability in the deployment process.
    2. Stateless applications: Applications that do not maintain state across multiple sessions, such as web applications, are well-suited for containers. Containers can be easily scaled up or down as needed and replaced with new instances, without losing data.
    3. Portable applications: Applications that need to be deployed in different environments, such as on-premises, in the cloud, or on edge devices, can benefit from containerization. The consistent and portable runtime environment of containers can make it easier to move the application between different environments.
    4. Legacy applications: Applications that are built using older technologies or that have compatibility issues can be containerized to run in an isolated environment, without impacting other applications or the host system.
    5. Dev and testing environments: Containerization can be used to create isolated development and testing environments, which can be easily created and destroyed as needed.

    While there are many types of applications that can benefit from a containerized approach, it's worth noting that containerization is not always the best option, and it's important to weigh the benefits and trade-offs before deciding to containerize an application. Additionally, some types of applications may not be a good fit for containers including:

    • Apps that require full access to host resources: Containers are isolated from the host system, so if an application needs direct access to hardware resources such as GPUs or specialized devices, it might not work well in a containerized environment.
    • Apps that require low-level system access: If an application requires deep access to the underlying operating system, it may not be suitable for running in a container.
    • Applications that have specific OS dependencies: Apps that have specific dependencies on a certain version of an operating system or libraries may not be able to run in a container.
    • Stateful applications: Apps that maintain state across multiple sessions, such as databases, may not be well suited for containers. Containers are ephemeral by design, so the data stored inside a container may not persist between restarts.

The good news is that some of these limitations can be overcome with the use of container orchestration technologies such as Kubernetes, and by carefully designing the architecture of the application.


    The power of Kubernetes

    Speaking of Kubernetes...

    Kubernetes is a powerful tool for managing and deploying containerized applications in production environments, particularly for applications that need to scale, handle large numbers of requests, or run in multi-cloud or hybrid environments.

    Kubernetes is well-suited for a wide variety of applications, but it is particularly well-suited for the following types of applications:

    1. Microservices-based applications: Kubernetes provides a powerful set of tools for managing and deploying microservices-based applications, making it easy to scale, update, and manage the individual components of the application.
2. Stateful applications: Kubernetes provides support for stateful applications through the use of Persistent Volumes and StatefulSets, allowing for applications that need to maintain state across multiple instances (see the sketch just after this list).
    3. Large-scale, highly-available systems: Kubernetes provides built-in support for scaling, self-healing, and rolling updates, making it an ideal choice for large-scale, highly-available systems that need to handle large numbers of users and requests.
    4. Multi-cloud and hybrid environments: Kubernetes can be used to deploy and manage applications across multiple cloud providers and on-premises environments, making it a good choice for organizations that want to take advantage of the benefits of multiple cloud providers or that need to deploy applications in a hybrid environment.
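As a rough illustration of the stateful support mentioned in item 2, a StatefulSet pairs stable pod identities with per-replica storage via volumeClaimTemplates. This is only a minimal sketch with illustrative names, not a manifest used elsewhere in this series:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sample-db                  # illustrative name
spec:
  serviceName: sample-db           # headless Service that gives each replica a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: sample-db
  template:
    metadata:
      labels:
        app: sample-db
    spec:
      containers:
      - name: postgres
        image: postgres:15.0-alpine
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi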
    New to Kubernetes?

    Where does Serverless fit in?

    Serverless is a cloud computing model where the cloud provider (like Azure) is responsible for executing a piece of code by dynamically allocating the resources. With serverless, you only pay for the exact amount of compute time that you use, rather than paying for a fixed amount of resources. This can lead to significant cost savings, particularly for applications with variable or unpredictable workloads.

    Serverless is commonly used for building applications like web or mobile apps, IoT, data processing, and real-time streaming - apps where the workloads are variable and high scalability is required. It's important to note that serverless is not a replacement for all types of workloads - it's best suited for stateless, short-lived and small-scale workloads.

    For a detailed look into the world of Serverless and lots of great learning content, revisit #30DaysofServerless.


    Resources


    What's up next in #CloudNativeNewYear?

    Week 1 has been all about the fundamentals of cloud-native. Next week, the team will be diving in to application deployment with Azure Kubernetes Service. Don't forget to subscribe to the blog to get daily posts delivered directly to your favorite feed reader!



    · 14 min read
    Steven Murawski

    Welcome to Day #1 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Last week we talked about Cloud Native architectures and the Cloud Native landscape. Today we'll explore the topic of Pods and Deployments in Kubernetes.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Setting Up A Kubernetes Environment in Azure
    • Running Containers in Kubernetes Pods
    • Making the Pods Resilient with Deployments
    • Exercise
    • Resources

    Setting Up A Kubernetes Environment in Azure

    For this week, we'll be working with a simple app - the Azure Voting App. My teammate Paul Yu ported the app to Rust and we tweaked it a bit to let us highlight some of the basic features of Kubernetes.

    You should be able to replicate this in just about any Kubernetes environment, but we'll use Azure Kubernetes Service (AKS) as our working environment for this week.

    To make it easier to get started, there's a Bicep template to deploy an AKS cluster, an Azure Container Registry (ACR) (to host our container image), and connect the two so that we can easily deploy our application.

    Step 0 - Prerequisites

    There are a few things you'll need if you want to work through this and the following examples this week.

    Required:

    • Git (and probably a GitHub account if you want to persist your work outside of your computer)
    • Azure CLI
    • An Azure subscription (if you want to follow along with the Azure steps)
    • Kubectl (the command line tool for managing Kubernetes)

    Helpful:

    • Visual Studio Code (or equivalent editor)

    Step 1 - Clone the application repository

    First, I forked the source repository to my account.

    $GitHubOrg = 'smurawski' # Replace this with your GitHub account name or org name
    git clone "https://github.com/$GitHubOrg/azure-voting-app-rust"
    cd azure-voting-app-rust

    Leave your shell opened with your current location inside the application repository.

    Step 2 - Set up AKS

    Running the template deployment from the demo script (I'm using the PowerShell example in cnny23-week2-day1.ps1, but there's a Bash variant at cnny23-week2-day1.sh) stands up the environment. The second, third, and fourth commands take some of the output from the Bicep deployment to set up for later commands, so don't close out your shell after you run these commands.

    az deployment sub create --template-file ./deploy/main.bicep --location eastus --parameters 'resourceGroup=cnny-week2'
    $AcrName = az deployment sub show --name main --query 'properties.outputs.acr_name.value' -o tsv
    $AksName = az deployment sub show --name main --query 'properties.outputs.aks_name.value' -o tsv
    $ResourceGroup = az deployment sub show --name main --query 'properties.outputs.resource_group_name.value' -o tsv

    az aks get-credentials --resource-group $ResourceGroup --name $AksName

    Step 3 - Build our application container

    Since we have an Azure Container Registry set up, I'll use ACR Build Tasks to build and store my container image.

    az acr build --registry $AcrName --% --image cnny2023/azure-voting-app-rust:{{.Run.ID}} .
$BuildTag = az acr repository show-tags `
  --name $AcrName `
  --repository cnny2023/azure-voting-app-rust `
  --orderby time_desc `
  --query '[0]' -o tsv
    tip

    Wondering what the --% is in the first command line? That tells the PowerShell interpreter to pass the input after it "as is" to the command without parsing/evaluating it. Otherwise, PowerShell messes a bit with the templated {{.Run.ID}} bit.

    Running Containers in Kubernetes Pods

    Now that we have our AKS cluster and application image ready to go, let's look into how Kubernetes runs containers.

If you've been in tech for any length of time, you've seen that every framework, runtime, orchestrator, etc., can have its own naming scheme for its concepts. So let's get into some of what Kubernetes calls things.

    The Pod

A container running in Kubernetes is called a Pod. A Pod is basically a running container on a Node or VM. It can be more - for example, you can run multiple containers in one Pod and specify some funky configuration - but we'll keep it simple for now and add the complexity when you need it.

    Our Pod definition can be created via the kubectl command imperatively from arguments or declaratively from a configuration file. We'll do a little of both. We'll use the kubectl command to help us write our configuration files. Kubernetes configuration files are YAML, so having an editor that supports and can help you syntax check YAML is really helpful.

    Creating a Pod Definition

    Let's create a few Pod definitions. Our application requires two containers to get working - the application and a database.

    Let's create the database Pod first. And before you comment, the configuration isn't secure nor best practice. We'll fix that later this week. For now, let's focus on getting up and running.

    This is a trick I learned from one of my teammates - Paul. By using the --output yaml and --dry-run=client options, we can have the command help us write our YAML. And with a bit of output redirection, we can stash it safely in a file for later use.

kubectl run azure-voting-db `
  --image "postgres:15.0-alpine" `
  --env "POSTGRES_PASSWORD=mypassword" `
  --output yaml `
  --dry-run=client > manifests/pod-db.yaml

    This creates a file that looks like:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: azure-voting-db
  name: azure-voting-db
spec:
  containers:
  - env:
    - name: POSTGRES_PASSWORD
      value: mypassword
    image: postgres:15.0-alpine
    name: azure-voting-db
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

    The file, when supplied to the Kubernetes API, will identify what kind of resource to create, the API version to use, and the details of the container (as well as an environment variable to be supplied).

    We'll get that container image started with the kubectl command. Because the details of what to create are in the file, we don't need to specify much else to the kubectl command but the path to the file.

    kubectl apply -f ./manifests/pod-db.yaml

I'm going to need the IP address of the Pod so that my application can connect to it, so let's use kubectl to get some information about our pod. By default, kubectl get pod only displays certain information, but it retrieves a lot more. We can use JSONPath syntax to index into the response and get the information we want.

    tip

To see what you can get, I usually run the kubectl command with JSON output (-o json), find where the data I want is, and then create my JSONPath query to get it.

    $DB_IP = kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    Now, let's create our Pod definition for our application. We'll use the same technique as before.

kubectl run azure-voting-app `
  --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
  --env "DATABASE_SERVER=$DB_IP" `
  --env "DATABASE_PASSWORD=mypassword" `
  --output yaml `
  --dry-run=client > manifests/pod-app.yaml

    That command gets us a similar YAML file to the database container - you can see the full file here

    Let's get our application container running.

    kubectl apply -f ./manifests/pod-app.yaml

    Now that the Application is Running

    We can check the status of our Pods with:

    kubectl get pods

    And we should see something like:

azure-voting-app-rust ❯  kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
azure-voting-app   1/1     Running   0          36s
azure-voting-db    1/1     Running   0          84s

    Once our pod is running, we can check to make sure everything is working by letting kubectl proxy network connections to our Pod running the application. If we get the voting web page, we'll know the application found the database and we can start voting!

    kubectl port-forward pod/azure-voting-app 8080:8080

    Azure voting website in a browser with three buttons, one for Dogs, one for Cats, and one for Reset.  The counter is Dogs - 0 and Cats - 0.

    When you are done voting, you can stop the port forwarding by using Control-C to break the command.

    Clean Up

    Let's clean up after ourselves and see if we can't get Kubernetes to help us keep our application running. We can use the same configuration files to ensure that Kubernetes only removes what we want removed.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    Summary - Pods

    A Pod is the most basic unit of work inside Kubernetes. Once the Pod is deleted, it's gone. That leads us to our next topic (and final topic for today.)

    Making the Pods Resilient with Deployments

    We've seen how easy it is to deploy a Pod and get our containers running on Nodes in our Kubernetes cluster. But there's a problem with that. Let's illustrate it.

    Breaking Stuff

    Setting Back Up

    First, let's redeploy our application environment. We'll start with our application container.

    kubectl apply -f ./manifests/pod-db.yaml
    kubectl get pod azure-voting-db -o jsonpath='{.status.podIP}'

    The second command will report out the new IP Address for our database container. Let's open ./manifests/pod-app.yaml and update the container IP to our new one.

    - name: DATABASE_SERVER
      value: YOUR_NEW_IP_HERE

    Then we can deploy the application with the information it needs to find its database. We'll also list out our pods to see what is running.

    kubectl apply -f ./manifests/pod-app.yaml
    kubectl get pods

    Feel free to look back and use the port forwarding trick to make sure your app is running if you'd like.

    Knocking It Down

    The first thing we'll try to break is our application pod. Let's delete it.

    kubectl delete pod azure-voting-app

    Then, we'll check our pod's status:

    kubectl get pods

    Which should show something like:

azure-voting-app-rust ❯  kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
azure-voting-db   1/1     Running   0          50s

    We should be able to recreate our application pod deployment with no problem, since it has the current database IP address and nothing else depends on it.

    kubectl apply -f ./manifests/pod-app.yaml

    Again, feel free to do some fun port forwarding and check your site is running.

    Uncomfortable Truths

    Here's where it gets a bit stickier, what if we delete the database container?

    If we delete our database container and recreate it, it'll likely have a new IP address, which would force us to update our application configuration. We'll look at some solutions for these problems in the next three posts this week.

    Because our database problem is a bit tricky, we'll primarily focus on making our application layer more resilient and prepare our database layer for those other techniques over the next few days.
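As a preview of one of those solutions, a Service (covered in the Services and Ingress post) gives the database a stable DNS name and virtual IP, so the application no longer needs a hard-coded pod IP. A minimal sketch, assuming the run=azure-voting-db label that kubectl run applied to our database pod above:

apiVersion: v1
kind: Service
metadata:
  name: azure-voting-db           # pods could then reach the database at this DNS name
spec:
  selector:
    run: azure-voting-db          # matches the label kubectl run put on our database pod
  ports:
  - port: 5432                    # default Postgres port
    targetPort: 5432

With a Service like this in place, the app could connect using the name azure-voting-db instead of an IP address.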

    Let's clean back up and look into making things more resilient.

    kubectl delete -f ./manifests/pod-app.yaml
    kubectl delete -f ./manifests/pod-db.yaml

    The Deployment

One of the reasons you may want to use Kubernetes is its ability to orchestrate workloads. Part of that orchestration includes being able to ensure that certain workloads are running (regardless of what Node they might be on).

    We saw that we could delete our application pod and then restart it from the manifest with little problem. It just meant that we had to run a command to restart it. We can use the Deployment in Kubernetes to tell the orchestrator to ensure we have our application pod running.

The Deployment can also encompass a lot of extra configuration - controlling how many replicas of a particular container should be running, how upgrades of container images should proceed, and more.
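To give a feel for that extra configuration, here is a minimal sketch of a Deployment that pins the replica count and a rolling update strategy. The image reference is a placeholder, and this is not the exact manifest we generate below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-voting-app
spec:
  replicas: 2                    # how many copies of the pod to keep running
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # how aggressively old pods are replaced during an upgrade
      maxSurge: 1
  selector:
    matchLabels:
      app: azure-voting-app
  template:
    metadata:
      labels:
        app: azure-voting-app
    spec:
      containers:
      - name: azure-voting-app
        image: <your-acr>.azurecr.io/cnny2023/azure-voting-app-rust:latest   # placeholder image reference
        ports:
        - containerPort: 8080

In the steps that follow, we'll let kubectl create deployment generate this kind of scaffolding for us and then edit in the details we need.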

    Creating the Deployment

    First, we'll create a Deployment for our database. We'll use a technique similar to what we did for the Pod, with just a bit of difference.

kubectl create deployment azure-voting-db `
  --image "postgres:15.0-alpine" `
  --port 5432 `
  --output yaml `
  --dry-run=client > manifests/deployment-db.yaml

    Unlike our Pod definition creation, we can't pass in environment variable configuration from the command line. We'll have to edit the YAML file to add that.

    So, let's open ./manifests/deployment-db.yaml in our editor and add the following in the spec/containers configuration.

        env:
        - name: POSTGRES_PASSWORD
          value: "mypassword"

    Your file should look like this deployment-db.yaml.

    Once we have our configuration file updated, we can deploy our database container image.

    kubectl apply -f ./manifests/deployment-db.yaml

    For our application, we'll use the same technique.

kubectl create deployment azure-voting-app `
  --image "$AcrName.azurecr.io/cnny2023/azure-voting-app-rust:$BuildTag" `
  --port 8080 `
  --output yaml `
  --dry-run=client > manifests/deployment-app.yaml

    Next, we'll need to add an environment variable to the generated configuration. We'll also need the new IP address for the database deployment.

Previously, we named the pod and were able to ask for the IP address with kubectl and a bit of JSONPath. Now the deployment created the pod for us, so there's a bit of randomness in the naming. Check out:

    kubectl get pods

    Should return something like:

azure-voting-app-rust ❯  kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
azure-voting-db-686d758fbf-8jnq8   1/1     Running   0          7s

    We can either ask for the IP with the new pod name, or we can use a selector to find our desired pod.

    kubectl get pod --selector app=azure-voting-db -o jsonpath='{.items[0].status.podIP}'

    Now, we can update our application deployment configuration file with:

        env:
        - name: DATABASE_SERVER
          value: YOUR_NEW_IP_HERE
        - name: DATABASE_PASSWORD
          value: mypassword

    Your file should look like this deployment-app.yaml (but with IPs and image names matching your environment).

    After we save those changes, we can deploy our application.

    kubectl apply -f ./manifests/deployment-app.yaml

    Let's test the resilience of our app now. First, we'll delete the pod running our application, then we'll check to make sure Kubernetes restarted our application pod.

    kubectl get pods
azure-voting-app-rust ❯  kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
azure-voting-app-56c9ccc89d-skv7x   1/1     Running   0          71s
azure-voting-db-686d758fbf-8jnq8    1/1     Running   0          12m

kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
kubectl get pods

azure-voting-app-rust ❯  kubectl delete pod azure-voting-app-56c9ccc89d-skv7x
>> kubectl get pods
pod "azure-voting-app-56c9ccc89d-skv7x" deleted
NAME                                READY   STATUS    RESTARTS   AGE
azure-voting-app-56c9ccc89d-2b5mx   1/1     Running   0          2s
azure-voting-db-686d758fbf-8jnq8    1/1     Running   0          14m
    info

    Your Pods will likely have different identifiers at the end, so adjust your commands to match the names in your environment.

    As you can see, by the time the kubectl get pods command was run, Kubernetes had already spun up a new pod for the application container image. Thanks Kubernetes!

    Clean up

Since deleting the pods would just cause the Deployments to recreate them, we have to delete the Deployments themselves.

    kubectl delete -f ./manifests/deployment-app.yaml
    kubectl delete -f ./manifests/deployment-db.yaml

    Summary - Deployments

    Deployments allow us to create more durable configuration for the workloads we deploy into Kubernetes. As we dig deeper, we'll discover more capabilities the deployments offer. Check out the Resources below for more.

    Exercise

    If you want to try these steps, head over to the source repository, fork it, clone it locally, and give it a spin!

    You can check your manifests against the manifests in the week2/day1 branch of the source repository.

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training


    · 6 min read
    Josh Duffney

    Welcome to Day 3 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about Services and Ingress. Today we'll explore the topic of passing configuration and secrets to our applications in Kubernetes with ConfigMaps and Secrets.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

• Decouple configurations with ConfigMaps and Secrets
    • Passing Environment Data with ConfigMaps and Secrets
    • Conclusion

Decouple configurations with ConfigMaps and Secrets

A ConfigMap is a Kubernetes object that decouples configuration data from pod definitions. Kubernetes Secrets are similar, but are designed to decouple sensitive information.

Separating the configuration and secrets from your application promotes better organization and security of your Kubernetes environment. It also enables you to share the same configuration and different secrets across multiple pods and deployments, which can simplify scaling and management. Using ConfigMaps and Secrets in Kubernetes is a best practice that can help to improve the scalability, security, and maintainability of your cluster.

    By the end of this tutorial, you'll have added a Kubernetes ConfigMap and Secret to the Azure Voting deployment.

    Passing Environment Data with ConfigMaps and Secrets

📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and set up your kubectl context. Check out the first post this week for more on the environment setup.

    Create the ConfigMap

ConfigMaps can be used in one of two ways: as environment variables or as volumes.

For this tutorial you'll use a ConfigMap to create three environment variables inside the pod: DATABASE_SERVER, FIRST_VALUE, and SECOND_VALUE. DATABASE_SERVER provides part of the connection string to the Postgres database. FIRST_VALUE and SECOND_VALUE are configuration options that change which voting options the application presents to the users.
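This tutorial takes the environment-variable route. For reference, mounting a ConfigMap as a volume looks roughly like the sketch below; the pod name and mount path are illustrative, and this isn't something we apply in this tutorial:

apiVersion: v1
kind: Pod
metadata:
  name: config-volume-demo        # illustrative pod name
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - name: config
      mountPath: /etc/config      # each key in the ConfigMap becomes a file here
  volumes:
  - name: config
    configMap:
      name: azure-voting-config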

    Follow the below steps to create a new ConfigMap:

    1. Create a YAML file named 'config-map.yaml'. In this file, specify the environment variables for the application.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: azure-voting-config
  data:
    DATABASE_SERVER: azure-voting-db
    FIRST_VALUE: "Go"
    SECOND_VALUE: "Rust"
    2. Create the config map in your Kubernetes cluster by running the following command:

      kubectl create -f config-map.yaml

    Create the Secret

    The deployment-db.yaml and deployment-app.yaml are Kubernetes manifests that deploy the Azure Voting App. Currently, those deployment manifests contain the environment variables POSTGRES_PASSWORD and DATABASE_PASSWORD with the value stored as plain text. Your task is to replace that environment variable with a Kubernetes Secret.

    Create a Secret running the following commands:

    1. Encode mypassword.

      echo -n "mypassword" | base64
2. Create a YAML file named secret.yaml. In this file, add POSTGRES_PASSWORD as the key and the encoded value returned above as the value in the data section.

  apiVersion: v1
  kind: Secret
  metadata:
    name: azure-voting-secret
  type: Opaque
  data:
    POSTGRES_PASSWORD: bXlwYXNzd29yZA==
    3. Create the Secret in your Kubernetes cluster by running the following command:

      kubectl create -f secret.yaml

[!WARNING] base64 encoding is a simple and widely supported way to obscure plaintext data, but it is not secure, as it can easily be decoded. If you want to store sensitive data like passwords, you should use a more secure method, such as encrypting with a Key Management Service (KMS), before storing it in the Secret.

    Modify the app deployment manifest

With the ConfigMap and Secret created, the next step is to replace the environment variables in the application deployment manifest with the values stored in the ConfigMap and the Secret.

Complete the following steps to add the ConfigMap and Secret to the deployment manifest:

    1. Open the Kubernetes manifest file deployment-app.yaml.

2. In the containers section, add an envFrom section and update the env section.

  envFrom:
    - configMapRef:
        name: azure-voting-config
  env:
    - name: DATABASE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: azure-voting-secret
          key: POSTGRES_PASSWORD

  Using envFrom exposes all the values within the ConfigMap as environment variables, so you don't have to list them individually.

    3. Save the changes to the deployment manifest file.

    4. Apply the changes to the deployment by running the following command:

      kubectl apply -f deployment-app.yaml

    Modify the database deployment manifest

Next, update the database deployment manifest and replace the plain text environment variable with the Kubernetes Secret.

    1. Open the deployment-db.yaml.

    2. To add the secret to the deployment, replace the env section with the following code:

  env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: azure-voting-secret
          key: POSTGRES_PASSWORD
    3. Apply the updated manifest.

      kubectl apply -f deployment-db.yaml

    Verify the ConfigMap and output environment variables

Verify that the ConfigMap was added to your deployment by running the following command:

kubectl describe deployment azure-voting-app

    Browse the output until you find the envFrom section with the config map reference.

You can also verify that the environment variables from the ConfigMap are being passed to the container by running the command kubectl exec -it <pod-name> -- printenv. This command will show you all the environment variables passed to the pod, including the ones from the ConfigMap.

    By following these steps, you will have successfully added a config map to the Azure Voting App Kubernetes deployment, and the environment variables defined in the config map will be passed to the container running in the pod.

    Verify the Secret and describe the deployment

    Once the secret has been created you can verify it exists by running the following command:

    kubectl get secrets

    You can view additional information, such as labels, annotations, type, and the Data by running kubectl describe:

    kubectl describe secret azure-voting-secret

    By default, the describe command doesn't output the encoded value, but if you output the results as JSON or YAML you'll be able to see the secret's encoded value.

     kubectl get secret azure-voting-secret -o json

    Conclusion

    In conclusion, using ConfigMaps and Secrets in Kubernetes can help to improve the scalability, security, and maintainability of your cluster. By decoupling configuration data and sensitive information from pod definitions, you can promote better organization and security in your Kubernetes environment. Additionally, separating these elements allows for sharing the same configuration and different secrets across multiple pods and deployments, simplifying scaling and management.

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.


    · 10 min read
    Steven Murawski

    Welcome to Day 5 of Week 2 of #CloudNativeNewYear!

    The theme for this week is Kubernetes fundamentals. Yesterday we talked about adding persistent storage to our deployment. Today we'll explore the topic of scaling pods and nodes in our Kubernetes cluster.

    Ask the Experts Thursday, February 9th at 9 AM PST
    Catch the Replay of the Live Demo

    Watch the recorded demo and conversation about this week's topics.

    We were live on YouTube walking through today's (and the rest of this week's) demos.

    What We'll Cover

    • Scaling Our Application
    • Scaling Pods
    • Scaling Nodes
    • Exercise
    • Resources

    Scaling Our Application

    One of our primary reasons to use a service like Kubernetes to orchestrate our workloads is the ability to scale. We've approached scaling in a multitude of ways over the years, taking advantage of the ever-evolving levels of hardware and software. Kubernetes allows us to scale our units of work, Pods, and the Nodes they run on. This allows us to take advantage of both hardware and software scaling abilities. Kubernetes can help improve the utilization of existing hardware (by scheduling Pods on Nodes that have resource capacity). And, with the capabilities of virtualization and/or cloud hosting (or a bit more work, if you have a pool of physical machines), Kubernetes can expand (or contract) the number of Nodes capable of hosting Pods. Scaling is primarily driven by resource utilization, but can be triggered by a variety of other sources thanks to projects like Kubernetes Event-driven Autoscaling (KEDA).

    Scaling Pods

Our first level of scaling is with our Pods. Earlier, when we worked on our deployment, we talked about how Kubernetes would use the deployment configuration to ensure that we had the desired workloads running. One thing we didn't explore was running more than one instance of a pod. We can define a number of replicas of a pod in our Deployment.

    Manually Scale Pods

    So, if we wanted to define more pods right at the start (or at any point really), we could update our deployment configuration file with the number of replicas and apply that configuration file.

spec:
  replicas: 5

    Or we could use the kubectl scale command to update the deployment with a number of pods to create.

    kubectl scale --replicas=5 deployment/azure-voting-app

    Both of these approaches modify the running configuration of our Kubernetes cluster and request that it ensure that we have that set number of replicas running. Because this was a manual change, the Kubernetes cluster won't automatically increase or decrease the number of pods. It'll just ensure that there are always the specified number of pods running.
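
Either way, you can confirm that the change took effect by checking the deployment's replica counts, for example:

# Shows desired vs. ready replicas for the deployment (expect 5/5 once the new pods are running)
kubectl get deployment azure-voting-app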

    Autoscale Pods with the Horizontal Pod Autoscaler

Another approach to scaling our pods is to let the Horizontal Pod Autoscaler scale them in response to the resources the pods are using. This requires a bit more configuration up front. When we define our pod in our deployment, we need to include resource requests and limits. The requests help Kubernetes determine which nodes may have capacity for a new instance of a pod. The limit tells the node where to cap utilization for a particular instance of a pod. For example, we'll update our deployment to request 0.25 CPU and set a limit of 0.5 CPU.

    spec:
      containers:
      - image: acrudavoz.azurecr.io/cnny2023/azure-voting-app-rust:ca4
        name: azure-voting-app-rust
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          value: postgres://postgres:mypassword@10.244.0.29
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m

Now that we've given Kubernetes an allowed range and an idea of what free resources a node should have to place new pods, we can set up autoscaling. Because autoscaling is a persistent configuration, I like to define it in a configuration file that I can keep with the rest of my cluster configuration. We'll use the kubectl command to help us write that configuration file. We'll ask Kubernetes to watch our pods, and when the average CPU utilization is 50% of the requested amount (in our case, more than about 0.125 CPU per pod, or 0.375 CPU across the minimum of three pods), it can grow the number of pods serving requests up to 10. If utilization drops, Kubernetes has permission to deprovision pods back down to the minimum (three in our example).

kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client

    Which would give us:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: null
  name: azure-voting-app
spec:
  maxReplicas: 10
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: azure-voting-app
  targetCPUUtilizationPercentage: 50
status:
  currentReplicas: 0
  desiredReplicas: 0

So, how often does the autoscaler check the metrics being monitored? The autoscaler checks the Metrics API every 15 seconds; however, the pod stats themselves are only updated every 60 seconds. This means that an autoscale event may be evaluated about once a minute. Once a scale-down event happens, however, Kubernetes applies a cooldown period to give the remaining pods a chance to absorb the workload and let new metrics accumulate. There is no delay on scale-up events.
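
If you'd like to watch this behavior yourself, one simple way (assuming the autoscaler created above) is to leave a watch running while you generate load against the application:

# Re-queries the HPA so you can see target utilization and replica counts change as metrics are sampled
kubectl get hpa azure-voting-app --watch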

    Application Architecture Considerations

In this example we've focused on our front end, which is the easier scaling story. When we start talking about scaling database layers, or anything that deals with persistent storage or has primary/replica configuration requirements, things get a bit more complicated. Some of these applications have built-in leader election, or can use sidecars to take advantage of existing Kubernetes features to perform that function. For shared storage scenarios, persistent volumes (including persistent volumes backed by Azure storage) can help, provided the application plays well with shared file access.

    Ultimately, you know your application architecture and, while Kubernetes may not have an exact match to how you are doing things today, the underlying capability is probably there under a different name. This abstraction allows you to more effectively use Kubernetes to operate a variety of workloads with the levels of controls you need.

    Scaling Nodes

We've looked at how to scale our pods, but that assumes we have enough resources in our existing pool of nodes to accommodate those scaling requests. Kubernetes can also help scale our available nodes to ensure that our applications have the necessary resources to meet their performance requirements.

    Manually Scale Nodes

Manually scaling nodes isn't a direct function of Kubernetes, so the instructions will vary with your operating environment. On Azure, it's pretty straightforward. Using the Azure CLI (or other tools), we can tell our AKS cluster to scale the number of nodes in our node pool up or down.

    First, we'll check out how many nodes we currently have in our working environment.

    kubectl get nodes

This will show us:

azure-voting-app-rust ❯  kubectl get nodes
NAME                            STATUS   ROLES   AGE     VERSION
aks-pool0-37917684-vmss000000   Ready    agent   5d21h   v1.24.6

    Then, we'll scale it up to three nodes.

    az aks scale --resource-group $ResourceGroup --name $AksName --node-count 3

    Then, we'll check out how many nodes we now have in our working environment.

    kubectl get nodes

    Which returns:

azure-voting-app-rust ❯  kubectl get nodes
NAME                            STATUS   ROLES   AGE     VERSION
aks-pool0-37917684-vmss000000   Ready    agent   5d21h   v1.24.6
aks-pool0-37917684-vmss000001   Ready    agent   5m27s   v1.24.6
aks-pool0-37917684-vmss000002   Ready    agent   5m10s   v1.24.6

    Autoscale Nodes with the Cluster Autoscaler

    Things get more interesting when we start working with the Cluster Autoscaler. The Cluster Autoscaler watches for the inability of Kubernetes to schedule the required number of pods due to resource constraints (and a few other criteria like affinity/anti-affinity). If there are insufficient resources available on the existing nodes, the autoscaler can provision new nodes into the nodepool. Likewise, the autoscaler watches to see if the existing pods could be consolidated to a smaller set of nodes and can remove excess nodes.

    Enabling the autoscaler is likewise an update that can be dependent on where and how your Kubernetes cluster is hosted. Azure makes it easy with a simple Azure CLI command.

    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 1 `
    --max-count 5

    There are a variety of settings that can be configured to tune how the autoscaler works.
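
As a hedged example of what that tuning looks like on AKS, the settings are exposed through the cluster autoscaler profile; the two values below are illustrative only (check the AKS documentation for the full list of settings and their defaults):

# Adjust how often the autoscaler evaluates the cluster and how long a node must be
# underutilized before it becomes a candidate for removal (example values only)
az aks update `
--resource-group $ResourceGroup `
--name $AksName `
--cluster-autoscaler-profile scan-interval=30s scale-down-unneeded-time=10m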

    Scaling on Different Events

    CPU and memory utilization are the primary drivers for the Horizontal Pod Autoscaler, but those might not be the best measures as to when you might want to scale workloads. There are other options for scaling triggers and one of the more common plugins to help with that is the Kubernetes Event-driven Autoscaling (KEDA) project. The KEDA project makes it easy to plug in different event sources to help drive scaling. Find more information about using KEDA on AKS here.
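
To give a flavor of what KEDA looks like in practice, here's a minimal, illustrative ScaledObject that scales our deployment on the depth of a hypothetical Azure Storage queue. The queue name and the environment variable holding the connection string are placeholders, and KEDA must already be installed in the cluster; treat this as a sketch rather than a drop-in manifest.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-voting-app-scaler
spec:
  scaleTargetRef:
    name: azure-voting-app                      # the deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: azure-queue                           # scale on Azure Storage queue length
    metadata:
      queueName: votes                          # placeholder queue name
      queueLength: "5"                          # target messages per replica
      connectionFromEnv: AzureWebJobsStorage    # env var on the workload containing the connection string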

    Exercise

    Let's try out the scaling configurations that we just walked through using our sample application. If you still have your environment from Day 1, you can use that.

    📝 NOTE: If you don't have an AKS cluster deployed, please head over to Azure-Samples/azure-voting-app-rust, clone the repo, and follow the instructions in the README.md to execute the Azure deployment and setup your kubectl context. Check out the first post this week for more on the environment setup.

    Configure Horizontal Pod Autoscaler

• Edit ./manifests/deployment-app.yaml to include resource requests and limits.
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 500m
• Apply the updated deployment configuration.
kubectl apply -f ./manifests/deployment-app.yaml
• Create the horizontal pod autoscaler configuration and apply it.
kubectl autoscale deployment azure-voting-app --cpu-percent=50 --min=3 --max=10 -o yaml --dry-run=client > ./manifests/scaler-app.yaml
kubectl apply -f ./manifests/scaler-app.yaml
• Check to see your pods scale out to the minimum (see the quick check after this list).
kubectl get pods
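
As a quick check (assuming the steps above), you can also confirm the autoscaler itself was created and is tracking the deployment:

# Shows the HPA's target utilization, current utilization, and min/max/current replica counts
kubectl get hpa azure-voting-app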

    Configure Cluster Autoscaler

    Configuring the basic behavior of the Cluster Autoscaler is a bit simpler. We just need to run the Azure CLI command to enable the autoscaler and define our lower and upper limits.

    • Check the current nodes available (should be 1).
    kubectl get nodes
    • Update the cluster to enable the autoscaler
    az aks update `
    --resource-group $ResourceGroup `
    --name $AksName `
    --update-cluster-autoscaler `
    --min-count 2 `
    --max-count 5
• Check to see the current number of nodes (should be 2 now; see the note after this list for a way to watch the new node come up).
    kubectl get nodes
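
Node provisioning can take a few minutes; one simple way to watch the second node register (assuming the cluster above) is:

# Re-queries the node list until the new node appears and reports Ready
kubectl get nodes --watch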

    Resources

    Take the Cloud Skills Challenge!

    Enroll in the Cloud Skills Challenge!

    Don't miss out on this opportunity to level up your skills and stay ahead of the curve in the world of cloud native.

    Documentation

    Training

    - + \ No newline at end of file diff --git a/cnny-2023/windows-containers/index.html b/cnny-2023/windows-containers/index.html index 4500f8ab44..f76b8370c2 100644 --- a/cnny-2023/windows-containers/index.html +++ b/cnny-2023/windows-containers/index.html @@ -14,14 +14,14 @@ - +

    4-3. Windows Containers

    · 7 min read
    Vinicius Apolinario

    Welcome to Day 3 of Week 4 of #CloudNativeNewYear!

    The theme for this week is going further with Cloud Native. Yesterday we talked about using Draft to accelerate your Kubernetes adoption. Today we'll explore the topic of Windows containers.

    What We'll Cover

    • Introduction
    • Windows containers overview
    • Windows base container images
    • Isolation
    • Exercise: Try this yourself!
    • Resources: For self-study!

    Introduction

Windows containers were launched along with Windows Server 2016 and have evolved since then. In the latest release, Windows Server 2022, Windows containers have reached a great level of maturity and allow customers to run production-grade workloads.

While suitable for new development, Windows containers also give developers and operations teams a different option than Linux containers. They allow existing Windows applications to be containerized with little or no code change, and they let professionals who are more comfortable with the Windows platform and OS leverage their existing skill set while taking advantage of the container platform.

Windows containers overview

In essence, Windows containers are very similar to Linux containers. Since Windows containers use the same Docker foundation, you can expect the same architecture to apply, with a few notes specific to the Windows OS. For example, when running a Windows container via Docker, you use the same commands, such as docker run. To pull a container image, you can use docker pull, just like on Linux. However, to run a Windows container, you also need a Windows container host. This requirement exists because, as you might remember, a container shares the OS kernel with its container host.

On Kubernetes, Windows containers have been supported since Windows Server 2019. Just like with Docker, you can manage Windows containers like any other resource in the Kubernetes ecosystem. A Windows node can be part of a Kubernetes cluster, allowing you to run Windows container-based applications on services like Azure Kubernetes Service. To deploy a Windows application to a Windows pod in Kubernetes, you author a YAML specification much like you would for Linux. The main difference is that you point to an image that runs on Windows, and you specify a node selector to indicate that the pod must run on a Windows node.
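
For example, a minimal fragment of such a pod template spec might look like the following; the name and image are taken from the exercise later in this post, so treat this as an illustrative sketch rather than a complete manifest:

spec:
  nodeSelector:
    "kubernetes.io/os": windows      # schedule this pod only onto Windows nodes
  containers:
  - name: sample
    image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp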

    Windows base container images

For Windows containers, you will always use a base container image provided by Microsoft. This base image contains the OS binaries the container needs to run, and can be as large as 3GB+ or as small as ~300MB. The difference in size is a consequence of the APIs and components available in each Windows base container image. There are primarily three images: Nano Server, Server Core, and Server.

Nano Server is the smallest image, at around 300MB. It's the base container image for new development and cloud-native scenarios. Applications need to target Nano Server as the Windows OS, so not all frameworks will work. For example, .NET works on Nano Server, but .NET Framework doesn't. Other third-party frameworks also work on Nano Server, such as Apache, Node.js, Python, Tomcat, the Java runtime, JBoss, and Redis, among others.

Server Core is a much larger base container image, at around 1.25GB. Its larger size is compensated for by its application compatibility: simply put, any application that meets the requirements to run in a Windows container can be containerized with this image.

The Server image builds on Server Core. It weighs in at around 3.1GB and has even greater application compatibility than the Server Core image. In addition to the traditional Windows APIs and components, this image enables scenarios such as machine learning via DirectX with GPU access.

The best image for your scenario depends on the requirements your application has of the Windows OS inside the container. However, some scenarios are not supported at all on Windows containers, such as GUI- or RDP-dependent applications and some Windows Server infrastructure roles (Active Directory, among others).

    Isolation

    When running containers, the kernel of the container host is shared with the containers running on it. While extremely convenient, this poses a potential risk for multi-tenant scenarios. If one container is compromised and has access to the host, it could potentially compromise other containers in the same system.

For enterprise customers running on-premises (or even in the cloud), this can be mitigated by using a VM as the container host and treating the VM itself as the security boundary. However, if multiple workloads from different tenants need to share the same host, Windows containers offer another option: Hyper-V isolation. While the name Hyper-V is associated with VMs, its virtualization capabilities extend to other services, including containers. Hyper-V isolated containers run on a purpose-built, extremely small, highly performant VM. You manage a container running with Hyper-V isolation the same way you do a process-isolated one; in fact, the only notable difference is that you need to append the --isolation=hyperv flag to the docker run command.

    Exercise

    Here are a few examples of how to use Windows containers:

    Run Windows containers via Docker on your machine

    To pull a Windows base container image:

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022

    To run a basic IIS container:

#This command will pull and start an IIS container. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    Run the same IIS container with Hyper-V isolation

#This command will pull and start an IIS container with Hyper-V isolation. You can access it from http://<your local IP>:8080
    docker run -d -p 8080:80 --isolation=hyperv mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

    To run a Windows container interactively:

    docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 powershell

    Run Windows containers on Kubernetes

To prepare an AKS cluster for Windows containers (note: replace the values in the example below with ones from your environment):

    echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME
    az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --generate-ssh-keys \
    --windows-admin-username $WINDOWS_USERNAME \
    --vm-set-type VirtualMachineScaleSets \
    --network-plugin azure

    To add a Windows node pool for Windows containers:

    az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwin \
    --node-count 1
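
Once the node pool has been created, a quick way to confirm the Windows node joined the cluster is to list the nodes with wide output, which includes the OS image for each node:

# The OS-IMAGE column should show Windows Server for the new npwin node
kubectl get nodes -o wide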

    Deploy a sample ASP.Net application to the AKS cluster above using the YAML file below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "kubernetes.io/os": windows
      containers:
      - name: sample
        image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
        resources:
          limits:
            cpu: 1
            memory: 800M
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: sample

    Save the file above and run the command below on your Kubernetes cluster:

kubectl apply -f <filename>

    Once deployed, you can access the application by getting the IP address of your service:

    kubectl get service
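
The EXTERNAL-IP may show as pending for a minute or two while the load balancer is provisioned; one way to wait for it (using the service name sample from the manifest above) is:

# Re-queries the service until the LoadBalancer's EXTERNAL-IP is assigned
kubectl get service sample --watch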

    Resources

    It's not too late to sign up for and complete the Cloud Skills Challenge!
    - + \ No newline at end of file diff --git a/docs/category/resources/index.html b/docs/category/resources/index.html index 4e5233767d..8f48089eca 100644 --- a/docs/category/resources/index.html +++ b/docs/category/resources/index.html @@ -14,13 +14,13 @@ - + - + \ No newline at end of file diff --git a/docs/category/videos/index.html b/docs/category/videos/index.html index 1e66f43330..e7359eaa92 100644 --- a/docs/category/videos/index.html +++ b/docs/category/videos/index.html @@ -14,13 +14,13 @@ - + - + \ No newline at end of file diff --git a/docs/resources/devtools/index.html b/docs/resources/devtools/index.html index e0a1af65b6..72da1412ea 100644 --- a/docs/resources/devtools/index.html +++ b/docs/resources/devtools/index.html @@ -14,13 +14,13 @@ - + - + \ No newline at end of file diff --git a/docs/resources/intro/index.html b/docs/resources/intro/index.html index 44b5dcb579..04cf3a4605 100644 --- a/docs/resources/intro/index.html +++ b/docs/resources/intro/index.html @@ -14,13 +14,13 @@ - +

    Cloud-Native Apps

    Cloud-Native Apps are built from the ground up and optimized for cloud scale and performance.

    They’re based on microservices architectures, use managed services, and take advantage of continuous delivery to achieve reliability and faster time to market. Learn how to build cloud-native apps on Azure!

    Azure Microservices
    Microservices
    Azure Serverless Icon
    Serverless
    Azure Containers Icon
    Containers
• Microservices - Simplify development of distributed cloud apps and take advantage of built-in, enterprise-grade security and autoscaling.
• Serverless - Build cloud-native apps without provisioning and managing infrastructure using a fully managed platform.
• Containers - Containerize apps and let Azure managed services handle orchestration, provisioning, upgrading, and scaling on demand.

    Resources

    Visit the Azure Architecture Center and

    - + \ No newline at end of file diff --git a/docs/resources/languages/index.html b/docs/resources/languages/index.html index 6461e21c1e..ece04a976c 100644 --- a/docs/resources/languages/index.html +++ b/docs/resources/languages/index.html @@ -14,13 +14,13 @@ - +

    Azure For Developers

    Want to get started developing on Azure? Bookmark and revisit this page for more resources!

    As developers, we have our favorite programming languages and developer workflows. Visit the Azure For Developers page to learn how to build with Azure using your preferred development environment. Here are a few top-level links.

    We'll keep the page updated with more resources throughout September!

    - + \ No newline at end of file diff --git a/docs/resources/serverless/index.html b/docs/resources/serverless/index.html index 4be2999f0e..5f57b0179e 100644 --- a/docs/resources/serverless/index.html +++ b/docs/resources/serverless/index.html @@ -14,13 +14,13 @@ - + - + \ No newline at end of file diff --git a/docs/videos/intro/index.html b/docs/videos/intro/index.html index 3174b4705f..0fc8d951b4 100644 --- a/docs/videos/intro/index.html +++ b/docs/videos/intro/index.html @@ -14,13 +14,13 @@ - +

    Serverless Hacks

    VIDEO PLAYLIST

    Watch The Serverless Hacks Walkthrough playlist on YouTube!

    In this series of 12 videos, Microsoft Cloud Advocate Gwyneth Peña-Siguenza walks you through building a .NET version of a Tollbooth app using Azure technologies as one approach to the Serverless Hacks Challenge.

Then:

Don't forget to bring your questions and insights to the weekly office hours to keep the momentum going, and explore the walkthrough videos below for inspiration.


    1. Overview

    2. Setup: Local Env

    3. Build: Hello World

    4. Provision: Resources

    5. Configure: Settings

    6. Deploy: Use VSCode

    7. Use: Azure Functions

    8. Use: App Insights

    9. Use: Logic Apps

    10. Debug: View Errors

    11. Query: Cosmos DB

    12. It's a Wrap!

    - + \ No newline at end of file diff --git a/index.html b/index.html index 6d401069e8..5dc26c735c 100644 --- a/index.html +++ b/index.html @@ -14,13 +14,13 @@ - +

    Build Intelligent Apps On Azure

    Combine the power of AI, cloud-scale data, and cloud-native app development to create highly differentiated digital experiences. Develop adaptive, responsive, and personalized experiences by building and modernizing intelligent applications with Azure.

    Azure Kubernetes Service

    Azure Kubernetes Service makes deploying managed Kubernetes clusters easier by offloading ops overhead to Azure.

    Azure Container Apps

    Azure Container Apps enables you to run microservices and containerized applications on a serverless platform.

    Azure Functions

Use Azure Functions to build event-driven serverless solutions with less code and lower infrastructure maintenance costs.

    Azure Cosmos DB

    Azure Cosmos DB is a fully managed, distributed NoSQL & relational database for modern app development.

    Azure AI Services

Build cutting-edge, market-ready, responsible apps for your organization with Azure OpenAI, Cognitive Search and more.

    GitHub

    Improve developer experience and enhance developer productivity with GitHub tooling like Actions, Copilot and Codespaces.

    - + \ No newline at end of file diff --git a/serverless-september/30DaysOfServerless/index.html b/serverless-september/30DaysOfServerless/index.html index b42f20e013..46189beec7 100644 --- a/serverless-september/30DaysOfServerless/index.html +++ b/serverless-september/30DaysOfServerless/index.html @@ -14,14 +14,14 @@ - +

    Roadmap for #30Days


    Welcome!

    This is a tentative roadmap for #30DaysOfServerless, a daily content series planned for the upcoming Serverless September project. It's a month-long celebration of Serverless On Azure with a curated journey that takes you from understanding core technologies to developing solutions for end-to-end scenarios - organized into 4 stages:

    • Go Serverless with Azure Functions
    • Deploy Microservices with Azure Container Apps
    • Simplify Integrations with Azure Event Grid & Logic Apps
    • Build End-to-End Solutions using familiar Dev Tools & Languages
    🚨 SEP 08: CHANGE IN PUBLISHING SCHEDULE

    Starting from Week 2 (Sep 8), we'll be publishing blog posts in batches rather than on a daily basis, so you can read a series of related posts together. Don't want to miss updates? Just subscribe to the feed

    Here are some actions you can take in the meantime:


    Sep 1: Kickoff

Welcome to our Serverless September kickoff! The series officially kicks off on September 1, 2022. However, we'll be publishing a few posts ahead of time to share more information about the many awesome initiatives we are planning for you.

    Kickoff

    SERVERLESS SEPTEMBER INITIATIVES
    LINKS TO POSTS

    Posts will be published nightly on our main blog page. Once the post is published, we will update the corresponding items in the sections below with direct links. You can subscribe to the blog to get updates delivered directly to your feed reader.


    Azure Functions

    Welcome to the Week 1 of your learning journey into Serverless technologies. Let's talk about Azure Functions - what it is, core features and tools, and best practices for getting started in the programming language of your choice.

    Azure Functions

    WEEK 1 - AZURE FUNCTIONS

    Posts will be linked here once published.


    Azure Container Apps

Welcome to Week 2. You've learned how to build event-driven serverless backends using Azure Functions. But how can you orchestrate and scale more complex solutions? The answer lies in microservice architectures and containerized apps. This week we explore Azure Container Apps (ACA) - and learn how the Distributed Application Runtime (Dapr) can work alongside ACA to unlock richer capabilities and simplify the developer experience.

    Azure Container Apps and Dapr

    WEEK 2 - AZURE CONTAINER APPS & DAPR

    Posts will be linked here once published.

    • Sep 09 - Learn Core Concepts
    • Sep 10 - Build an ACA (with/out Dapr)
    • Sep 11 - Learn About: Communication
    • Sep 12 - Learn About: State Management
    • Sep 13 - Learn About: Observability
    • Sep 14 - Learn About: Secure Access
    • Sep 15 - ACA + Serverless On Azure

    Serverless Integrations

    Welcome to Week 3 - you've learned to build serverless applications using functions and microservices, orchestrated as containerized applications. Now let's explore a few core Azure services that streamline integrations with Azure and non-Azure services in standard, scalable ways.

    Week 3 Roadmap Week 3 Roadmap

    WEEK 3 - AZURE EVENT GRID & AZURE LOGIC APP

    Posts will be linked here once published.

    • Sep 16 - Logic Apps: Core Concepts
    • Sep 17 - Logic Apps: Quickstart
    • Sep 18 - Logic Apps: Best Practices
    • Sep 19 - Event Grid: Core Concepts
    • Sep 20 - Event Grid: Quickstart
    • Sep 21 - Event Grid: Best Practices
    • Sep 22 - Integrations + Serverless On Azure

    Serverless End-To-End

    It's the final week of Serverless September! So far we've talked about various components of a Serverless solution on Azure. Now let's explore various end-to-end examples and learn how we can make these components work together.

    Week 4 ARTICLES

    Posts will be linked here once published.

    • Sep 23 - TBA
    • Sep 24 - TBA
    • Sep 25 - TBA
    • Sep 26 - TBA
    • Sep 27 - TBA
    • Sep 28 - TBA
    • Sep 29 - TBA

    Week 4 Roadmap


    Sep 30: Summary

    THANK YOU & NEXT STEPS

    Thank you for staying the course with us. In the final two posts of this series we'll do two things:

    • Look Back - with a quick retrospective of what was covered.
    • Look Ahead - with resources and suggestions for how you can skill up further!

    We appreciate your time and attention and we hope you found this curated tour valuable. Feedback and suggestions are always welcome. From our entire team, we wish you good luck with the learning journey - now go build some apps and share your knowledge! 🎉

    Thank You


    - + \ No newline at end of file diff --git a/serverless-september/AskTheExpert/index.html b/serverless-september/AskTheExpert/index.html index a0ea7aacea..383c7e71b4 100644 --- a/serverless-september/AskTheExpert/index.html +++ b/serverless-september/AskTheExpert/index.html @@ -14,13 +14,13 @@ - +

    Ask The Expert

    1. Open a New Issue on the repo.
    2. Click Get Started on the 🎤 Ask the Expert! template.
    3. Fill in the details and submit!

    Our team will review all submitted questions and prioritize them for the live ATE session. Questions that don't get answered live (due to time constraints) will be responded to here, in response to your submitted issue.


    What is it?

Ask the Expert is a series of scheduled 30-minute LIVE broadcasts where you can connect with experts to get your questions answered! You can also visit the site later to view sessions on demand - and see answers to questions you may have submitted ahead of time.


    How does it work?

    The live broadcast will have a moderated chat session where you can submit questions in real time. We also have a custom 🎤 Ask The Expert issue you can use to submit questions ahead of time as mentioned earlier.

    • We strongly encourage you to submit questions early using that issue
    • Browse previously posted questions to reduce duplication.
    • Upvote (👍🏽) existing questions of interest to help us prioritize them for the live show.

    Doing this will help us all in a few ways:

    • We can ensure that all questions get answered here, even if we run out of time on the live broadcast.
    • Others can vote (👍🏽) on your question - helping us prioritize them live based on popularity
    • We can update them with responses post-event for future readers.

    When is it?

Visit the ATE : Serverless September page to see the latest schedule and registration links! For convenience, we've replicated some information here. Please click the REGISTER TO ATTEND links to save the date and get notified of key details like links to the livestream (pre-event) and recording (post-event).

Date | Description

Sep 15, 2022 : Functions-as-a-Service (FaaS)
It is time to focus on the pieces of code that matter most to you while Azure Functions handles the rest. Discuss with the experts how to execute event-driven serverless code functions with an end-to-end development experience using Azure Functions.
REGISTER TO ATTEND

Sep 29, 2022 : Containers & Microservices
Azure Container Apps is an app-centric service, empowering developers to focus on the differentiating business logic of their apps rather than on cloud infrastructure management. Discuss with the experts how to build and deploy modern apps and microservices using serverless containers with Azure Container Apps.
REGISTER TO ATTEND
    - + \ No newline at end of file diff --git a/serverless-september/CloudSkills/index.html b/serverless-september/CloudSkills/index.html index 69c6bc3699..66e1f3e737 100644 --- a/serverless-september/CloudSkills/index.html +++ b/serverless-september/CloudSkills/index.html @@ -14,13 +14,13 @@ - +

    Cloud Skills Challenge

Use the link above to register for the Cloud Skills Challenge today! You will get an automatic email notification when the challenge kicks off, so you don't waste any time! The challenge runs for 30 days (Sep 1 - Sep 30), so an early start helps!


    About Cloud Skills

    The Cloud Skills Challenge is a fun way to skill up on Azure serverless technologies while competing with other members of the community for a chance to win fun swag!

    You'll work your way through learning modules that skill you up on relevant technologies - while collecting points that place you on a Leaderboard.

    1. 🎯 Compete - Benchmark your progress against friends and coworkers.
    2. 🎓 Learn - Increase your understanding by completing learning modules.
    3. 🏆 Skill Up - Gain useful technical skills and prep for certifications.

    About Microsoft Learn

    Completed the Cloud Skills Challenge, and want to keep going on your learning journey? Or, perhaps there are other Cloud+AI topics you want to skill up in? Check out these three resources for building your professional profile!

1️⃣ LEARNING PATHS - Skill up on a topic with guided paths for self-study!
2️⃣ CERTIFICATIONS - Showcase your expertise with industry-recognized credentials!
3️⃣ LEARNING EVENTS - Learn from subject matter experts in live & recorded events.
    - + \ No newline at end of file diff --git a/serverless-september/CommunityBuzz/index.html b/serverless-september/CommunityBuzz/index.html index d5ecf6cae6..3f29cb1177 100644 --- a/serverless-september/CommunityBuzz/index.html +++ b/serverless-september/CommunityBuzz/index.html @@ -14,13 +14,13 @@ - + - + \ No newline at end of file diff --git a/serverless-september/ServerlessHacks/index.html b/serverless-september/ServerlessHacks/index.html index eb758dc485..6d0c5594f3 100644 --- a/serverless-september/ServerlessHacks/index.html +++ b/serverless-september/ServerlessHacks/index.html @@ -14,13 +14,13 @@ - +

    Serverless Hacks

    1. Open a New Issue on the repo.
    2. Click Get Started on the 🎯 My Serverless Hacks ! template.
    3. Fill in the details and submit!

    We'll review submissions on a rolling basis, to verify that the submitted hacks are complete. Accepted submissions will be added to the 🏆 Hall Of Fame here as a permanent record of your accomplishment!

    You can submit multiple entries - but each must be associated with a unique GitHub repo and showcase something new or different you did beyond the default. Read on for examples of how you can Extend the Hack.


    🌩 Join The Hack!

    Visit the Serverless September At The Reactor page and register to attend weekly online sessions with Cloud Advocate Gwyneth Peña-Siguenza and special guests! Hear real-world serverless stories, ask questions and get insights to help you progress in your challenge.

    • Sep 7 | How to get into Tech And Serverless - with Linda Nichols. REGISTER HERE
    • Sep 14 | How to DevOps and Serverless the Right Way. REGISTER HERE
    • Sep 21 | The Serverless Project that Got Me Promoted! REGISTER HERE
    • Sep 28 | So you want to migrate your project to Serverless? REGISTER HERE

She will host weekly office hours to discuss Serverless topics, take questions, and provide guidance to help you work through the mini-challenges in this year's What The Hack: Serverless challenge, described below. Plus, she'll share her own solution in a series of video walkthroughs that can guide you on your own challenge journey!


    🎯 Complete Hacks

Your challenge this year comes from What The Hack, part of a collection of challenge-based hackathons that you can complete on your own or in a team of 3-5 people, as a collaborative learning experience in person or online. The goal is to learn from each other and share your insights with the broader community in a way that helps you build and retain expertise, while also contributing back.

    The figure above shows the specific challenge you will work on: Azure Serverless in the category of Application Modernization. In this challenge, you will build a Tollbooth application using a serverless architecture involving multiple Azure services.

    Don't forget to join the weekly office hour sessions if you have questions or need help. And make sure you submit your solution to our Hall Of Fame when you are done!

    SERVERLESS HACK RESOURCES

    Here's a handy link to the Resources.zip file that is mentioned in the Serverless Hacks walkthrough.


    💡 Extend Hacks

    The 8-challenge hack provides the default path for working on a solution. But you have options to go beyond this, or do something new or different!

    • Check out the Optional Challenges identified in the Hack page.
    • Implement your solution in different languages (Java, JS, C#/.NET, Python)
    • Extend the scenario to add another Azure Service (e.g., Azure Container Apps)
    • Explore new developer tools or workflows (e.g., Azure Developer CLI)

    🏆 Hall Of Fame

    THE SERVERLESS HACKS HALL OF FAME!

    This section lists participants who submitted a valid hack solution, with a link to the repository containing their code. We wanted to celebrate your accomplishments publicly, and amplify your work as a learning resource for others!

    We can't wait to see what you build!

    - + \ No newline at end of file diff --git a/serverless-september/ZeroToHero/index.html b/serverless-september/ZeroToHero/index.html index 07fb772749..1ea8a7056f 100644 --- a/serverless-september/ZeroToHero/index.html +++ b/serverless-september/ZeroToHero/index.html @@ -14,13 +14,13 @@ - +

    Zero To Hero

    About This Series

Zero-to-Hero is a series of blog posts from our Product Engineering teams that will be published on the Microsoft Tech Community: Apps On Azure blog, with links updated below for convenience.


    Azure Functions

Published On | Topic | Author / Link
Sep 5, 2022 | A walkthrough of Durable Entities | Lily Ma, David Justo
Sep 12, 2022 | Building serverless Go applications with Azure Functions with Custom Handlers | Melony Qin
Sep 15, 2022 | 🎤 Ask The Expert: Live Q&A with Azure Functions Team | 🌟 Register
Sep 19, 2022 | Error Handling with Apache Kafka extension for Azure | Ramya Oruganti
Sep 26, 2022 | Monitoring & Troubleshooting apps in Azure Functions | Madhura Bharadwaj

    Azure Container Apps

Published On | Topic | Author
Sep 5, 2022 | Go Cloud-Native With Azure Container Apps | Kendall Roden
Sep 12, 2022 | Journey to the cloud with Azure Container Apps | Anthony Chu
Sep 19, 2022 | Observability with Azure Container Apps | Mike Morton
Sep 26, 2022 | End-to-End solution development with code | Kendall Roden
Sep 29, 2022 | 🎤 Ask The Expert: Live Q&A with Azure Container Apps Team | 🌟 Register
    - + \ No newline at end of file diff --git a/serverless-september/index.html b/serverless-september/index.html index 570e7f828d..a0f804c67f 100644 --- a/serverless-september/index.html +++ b/serverless-september/index.html @@ -14,13 +14,13 @@ - +

    It's Serverless September!

    Join us for a month-long celebration of serverless computing - from core concepts and developer tools, to usage scenarios and best practices. Bookmark this page, then join us September 1, 2022 as we kickstart multiple community-driven and self-guided learning initiatives for jumpstarting your Cloud-Native journey.

    #30DaysOfServerless

    Join us on a #30Day journey covering Azure Functions, Container Apps, Dapr, Event Grid, Logic Apps & more.

    Zero To Hero

    Get the latest updates on Serverless On Azure products and features - directly from product teams!

    Serverless Hacks

    Join us for weekly events at Microsoft Reactor, as we work through hands-on challenges in Serverless!

    Cloud Skills

    Skill up on key cloud technologies with these free, self-guided learning courses - and make the leaderboard!

    Ask The Expert

    Join us for online conversations with the product teams - submit questions ahead of time or ask them live!

    Community Buzz

Built interesting demos or written helpful articles? Contribute your feedback and content for a chance to be featured!

    - + \ No newline at end of file