diff --git a/00-course-setup/README.md b/00-course-setup/README.md
index 0fc05eb11..b67aa1303 100644
--- a/00-course-setup/README.md
+++ b/00-course-setup/README.md
@@ -1,67 +1,65 @@
-# Getting Started with this course
+# Getting Started with this course
-We are very excited for you to start this course and see what you get inspired to build with Generative AI!
+We are very excited for you to start this course and see what you get inspired to build with Generative AI!
-To make your time successful, we have created this page that outlines any setup steps, technical requirements, and how to get help when you need it.
+To make your time successful, we have created this page that outlines any setup steps, technical requirements, and how to get help when you need it.
+## Setup Steps
-## Setup Steps
-
+To start taking this course, you will need to complete the following steps.
### 1. Fork this Repo
-[Fork this entire repo](https://github.com/microsoft/generative-ai-for-beginners/fork) to your own GitHub account to be able to change any code and complete the challenges. You can also [star (🌟) this repo](https://docs.github.com/en/get-started/exploring-projects-on-github/saving-repositories-with-stars) to find it and related repos easier.
-
+[Fork this entire repo](https://github.com/microsoft/generative-ai-for-beginners/fork) to your own GitHub account to be able to change any code and complete the challenges. You can also [star (🌟) this repo](https://docs.github.com/en/get-started/exploring-projects-on-github/saving-repositories-with-stars) to make it and related repos easier to find.
-### 2. Create a Codespace
-To avoid any dependency issues when running the code, we recommend running this course in a GitHub codespace.
+### 2. Create a codespace
-This can be created by selecting the `Code` option on your forked version of this repo and selecting the `Codespaces` option.
+To avoid any dependency issues when running the code, we recommend running this course in a GitHub codespace.
-![](./images/who-will-pay.webp)
+This can be created by selecting the **Code** option on your forked version of this repo and selecting the **Codespaces** option.
+![Dialog showing buttons to create a codespace](./images/who-will-pay.webp)
-### 3. Storing Your API Keys
-Keeping your API keys safe and secure is important when building any type of application. We encourage you not to store any API keys directly in the code you are working with as committing those details to a public repository could result in unwanted costs and issues.
+### 3. Storing Your API Keys
-One of the best ways to keep your API keys secure when using GitHub Codespaces is by using Codespace Secrets. Please follow this guide on how to [manage secrets for your codespaces](https://docs.github.com/en/codespaces/managing-your-codespaces/managing-secrets-for-your-codespaces).
+Keeping your API keys safe and secure is important when building any type of application. We encourage you not to store any API keys directly in the code you are working with, as committing those details to a public repository could result in unwanted costs and issues.
+One of the best ways to keep your API keys secure when using GitHub Codespaces is by using Codespace Secrets. Please follow this guide on how to [manage secrets for your codespaces](https://docs.github.com/en/codespaces/managing-your-codespaces/managing-secrets-for-your-codespaces).
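Codespace secrets are exposed to your codespace as environment variables, so your code can read a key at runtime without ever hard-coding it. A minimal sketch in Python (the secret names here are hypothetical; use whatever names you created in your Codespaces settings):

```python
import os

def get_secret(name: str) -> str:
    """Read a Codespace secret; GitHub exposes each secret to the
    codespace as an environment variable with the same name."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(
            f"Secret {name!r} is not set; add it under your repo's Codespaces secrets."
        )
    return value

# Hypothetical secret name for illustration:
# api_key = get_secret("AZURE_OPENAI_API_KEY")
```

Failing fast with a clear error when the secret is missing makes misconfigured codespaces easy to diagnose.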
-## Lessons and Technical Requirements
+## Lessons and Technical Requirements
-The course has 6 concept lessons and 6 coding lessons.
+The course has 6 concept lessons and 6 coding lessons.
-For the coding lessons, we are using the Azure OpenAI Service. You will need access to the Azure OpenAI service and an API key to run this code. You can apply to get access by [completing this application](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOFA5Qk1UWDRBMjg0WFhPMkIzTzhKQ1dWNyQlQCN0PWcu&culture=en-us&country=us).
+For the coding lessons, we are using the Azure OpenAI Service. You will need access to the Azure OpenAI service and an API key to run this code. You can apply to get access by [completing this application](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOFA5Qk1UWDRBMjg0WFhPMkIzTzhKQ1dWNyQlQCN0PWcu&culture=en-us&country=us).
While you wait for your application to be processed, each coding lesson also includes a `README.md` file where you can view the code and outputs.
-## Using the Azure OpenAI Service for the First Time
-
-If this is your first time working with the Azure OpenAI service, please follow this guide on how to [create and deploy an Azure OpenAI Service resource.](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal)
+## Using the Azure OpenAI Service for the First Time
-## Meet Other Learners
+If this is your first time working with the Azure OpenAI service, please follow this guide on how to [create and deploy an Azure OpenAI Service resource](https://learn.microsoft.com/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).
-We have created channels in our official [AI Community Discord server](https://aka.ms/genai-discord) for meeting other learners. This is a great way to network with other like-minded entrepreneurs, builders, students, and anyone looking to level up in Generative AI.
+## Meet Other Learners
-[![](https://dcbadge.vercel.app/api/server/ByRwuEEgH4)](https://aka.ms/genai-discord)
+We have created channels in our official [AI Community Discord server](https://aka.ms/genai-discord) for meeting other learners. This is a great way to network with other like-minded entrepreneurs, builders, students, and anyone looking to level up in Generative AI.
-The project team will also be on this Discord server to help any learners.
+[![Join discord channel](https://dcbadge.vercel.app/api/server/ByRwuEEgH4)](https://aka.ms/genai-discord)
+The project team will also be on this Discord server to help any learners.
-## Contribute
+## Contribute
This course is an open-source initiative. If you see areas of improvement or issues, please create a [Pull Request](https://github.com/microsoft/generative-ai-for-beginners/pulls) or log a [Github issue](https://github.com/microsoft/generative-ai-for-beginners/issues).
-The project team will be tracking all contributions and contributing to open source is an amazing way to build your career in Generative AI.
+The project team will be tracking all contributions, and contributing to open source is an amazing way to build your career in Generative AI.
-Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
+Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit the [Contributor License Agreement (CLA) website](https://cla.microsoft.com).
Important: when translating text in this repo, please ensure that you do not use machine translation. We will verify translations via the community, so please only volunteer for translations in languages where you are proficient.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
-This project has adopted the Microsoft Open Source Code of Conduct. For more information read the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
+This project has adopted the Microsoft Open Source Code of Conduct. For more information read the Code of Conduct FAQ or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
-## Let's Get Started
+## Let's Get Started
Now that you have completed the needed steps to complete this course, let's get started by getting an [introduction to Generative AI and LLMs](../01-introduction-to-genai/README.md).
\ No newline at end of file
diff --git a/02-exploring-and-comparing-different-llms/README.md b/02-exploring-and-comparing-different-llms/README.md
index fab2875c7..4b8f9a6c1 100644
--- a/02-exploring-and-comparing-different-llms/README.md
+++ b/02-exploring-and-comparing-different-llms/README.md
@@ -2,7 +2,7 @@
[![Exploring and comparing different LLMs](./images/02-lesson-banner.png)](https://youtu.be/J1mWzw0P74c)
-*(Click the image above to view video of this lesson)*
+> *Click the image above to view video of this lesson*
With the previous lesson, we have seen how Generative AI is changing the technology landscape, how Large Language Models (LLMs) work and how a business - like our startup - can apply them to their use cases and grow! In this chapter, we're looking to compare and contrast different types of large language models, LLMs to understand their pros and cons.
@@ -19,42 +19,44 @@ This lesson will cover:
## Learning Goals
After completing this lesson, you will be able to:
+
- Select the right model for your use case.
- Understand how to test, iterate, and improve performance of your model.
- Know how businesses deploy models.
-## Understand different types of LLMs
+## Understand different types of LLMs
-Large Language Models (LLMs) can have multiple categorizations based on their architecture, training data, and use case. Understanding these differences will help our startup select the right model for the scenario, and understand how to test, iterate, and improve performance.
+Large Language Models (LLMs) can have multiple categorizations based on their architecture, training data, and use case. Understanding these differences will help our startup select the right model for the scenario, and understand how to test, iterate, and improve performance.
-There are many different types of LLM models, your choice of model depends on what you aim to use them for, your data, how much you're ready to pay and more.
+There are many different types of LLM models, your choice of model depends on what you aim to use them for, your data, how much you're ready to pay and more.
-Depending on if you aim to use the models for text, audio, video, image generation and so on, you might opt for a differen type of model.
+Depending on if you aim to use the models for text, audio, video, image generation and so on, you might opt for a different type of model.
-- **Audio and speech recognition**. For this purpose, Whisper-type models are a great choice as they're general-purpose and aimed at speech recognition. It's trained on diverse audio and can perform multilingual speech recognition. As an example, you can use everything from a cheaper, but capable model like curie to the more costly but performat davinci type model. Learn more about [Whisper type models here](https://platform.openai.com/docs/models/whisper).
+- **Audio and speech recognition**. For this purpose, Whisper-type models are a great choice as they're general-purpose and aimed at speech recognition. They're trained on diverse audio and can perform multilingual speech recognition. As an example, you can use everything from a cheaper but capable model like curie to the more costly but performant davinci-type model. Learn more about [Whisper type models here](https://platform.openai.com/docs/models/whisper).
-- **Image generation**. For image generation, DALL-E and Midjourney are two very known choices. DALL-E is offered by Azure OpenAI. [Read more about DALL-E here](https://platform.openai.com/docs/models/dall-e) and also in Chapter 9 of this curriculum
+- **Image generation**. For image generation, DALL-E and Midjourney are two very well-known choices. DALL-E is offered by Azure OpenAI. [Read more about DALL-E here](https://platform.openai.com/docs/models/dall-e) and also in Chapter 9 of this curriculum.
- **Text generation**. Most models are trained on text generation and you have a large variety of choices from GPT-3.5 to GPT-4. They come at different costs with GPT-4 being the most expensive. It's worth looking into the [Azure Open AI playground](https://oai.azure.com/portal/playground) to evaluate which models best fit your needs in terms of capability and cost.
-Selecting a model means you get some basic capabilties, that might not be enough however. Often you have company specific data that you somehow need to tell the LLM about. There are a few different choices on how to approach that, more on that in the upcoming sections.
+Selecting a model means you get some basic capabilities, but that might not be enough. Often you have company-specific data that you somehow need to tell the LLM about. There are a few different choices on how to approach that; more on that in the upcoming sections.
### Foundation Models versus LLMs
The term Foundation Model was [coined by Stanford researchers](https://arxiv.org/abs/2108.07258) and defined as an AI model that follows some criteria, such as:
-- **They are trained using unsupervised learning or self-supervised learning**, meaning they are trained on unlabeled multimodal data, and they do not require human annotation or labeling of data for their training process.
+
+- **They are trained using unsupervised learning or self-supervised learning**, meaning they are trained on unlabeled multi-modal data, and they do not require human annotation or labeling of data for their training process.
- **They are very large models**, based on very deep neural networks trained on billions of parameters.
- **They are normally intended to serve as a ‘foundation’ for other models**, meaning they can be used as a starting point for other models to be built on top of, which can be done by fine-tuning.
![Foundation Models versus LLMs](./images/FoundationModel.png)
-
+
Image source: [Essential Guide to Foundation Models and Large Language Models | by Babar M Bhatti | Medium
-](https://thebabar.medium.com/essential-guide-to-foundation-models-and-large-language-models-27dab58f7404)
+](https://thebabar.medium.com/essential-guide-to-foundation-models-and-large-language-models-27dab58f7404)
-To further clarify this distinction, let’s take ChatGPT as an example. To build the first version of ChatGPT, a model called GPT-3.5 served as the foundation model. This means that OpenAI used some chat-specific data to create a tuned version of GPT-3.5 that was specialized in performing well in conversational scenarios, such as chatbots.
+To further clarify this distinction, let’s take ChatGPT as an example. To build the first version of ChatGPT, a model called GPT-3.5 served as the foundation model. This means that OpenAI used some chat-specific data to create a tuned version of GPT-3.5 that was specialized in performing well in conversational scenarios, such as chatbots.
![Foundation Model](./images/Multimodal.png)
-
+
Image source: [2108.07258.pdf (arxiv.org)](https://arxiv.org/pdf/2108.07258.pdf)
### Open Source versus Proprietary Models
@@ -65,7 +67,6 @@ Open-source models are models that are made available to the public and can be u
Proprietary models are models that are owned by a company and are not made available to the public. These models are often optimized for production use. However, they are not allowed to be inspected, modified, or customized for different use cases. Plus, they are not always available for free, and may require a subscription or payment to use. Also, users do not have control over the data that is used to train the model, which means they should entrust the model owner with ensuring commitment about data privacy and responsible use of AI. Examples of popular proprietary models include [OpenAI models](https://platform.openai.com/docs/models/overview), [Google Bard](https://sapling.ai/llm/bard) or [Claude 2](https://www.anthropic.com/index/claude-2).
-
### Embedding versus Image generation versus Text and Code generation
LLMs can also be categorized by the output they generate.
@@ -74,7 +75,7 @@ Embeddings are a set of models that can convert text into a numerical form, call
![Embedding](./images/Embedding.png)
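To make "numerical form" concrete, here is a toy sketch of how embedding vectors are compared with cosine similarity. The three-dimensional vectors below are invented for illustration; real embedding models produce hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    # Embeddings that point in similar directions describe similar text.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-dimensional "embeddings" for three words.
dog = [0.9, 0.1, 0.05]
puppy = [0.85, 0.15, 0.1]
car = [0.05, 0.9, 0.2]

print(cosine_similarity(dog, puppy))  # close to 1.0
print(cosine_similarity(dog, car))    # much lower
```

This similarity score is exactly what powers the semantic-search use case described above: related texts end up near each other in the vector space.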
-Image generation models are models that generate images. These models are often used for image editing, image synthesis, and image translation. Image generation models are often trained on large datasets of images, such as [LAION-5B](https://laion.ai/blog/laion-5b/), and can be used to generate new images or to edit existing images with inpainting, super-resolution, and colorization techniques. Examples include [DALLE3](https://openai.com/dall-e-3) and [Stable Diffusion models](https://github.com/Stability-AI/StableDiffusion).
+Image generation models are models that generate images. These models are often used for image editing, image synthesis, and image translation. Image generation models are often trained on large datasets of images, such as [LAION-5B](https://laion.ai/blog/laion-5b/), and can be used to generate new images or to edit existing images with inpainting, super-resolution, and colorization techniques. Examples include [DALL-E 3](https://openai.com/dall-e-3) and [Stable Diffusion models](https://github.com/Stability-AI/StableDiffusion).
![Image generation](./images/Image.png)
@@ -84,7 +85,7 @@ Text and code generation models are models that generate text or code. These mod
### Encoder-Decoder versus Decoder-only
-To talk about the different types of architectures of LLMs, let's use an analogy.
+To talk about the different types of architectures of LLMs, let's use an analogy.
Imagine your manager gave you a task for writing a quiz for the students. You have two colleagues; one oversees creating the content and the other oversees reviewing them.
@@ -96,40 +97,41 @@ Imagine that we can have someone as well who could create and review the quiz, t
### Service versus Model
-Now, let's talk about the difference between a service and a model. A service is a product that is offered by a Cloud Service Provider, and is often a combination of models, data, and other components. A model is the core component of a service, and is often a foundation model, such as an LLM.
+Now, let's talk about the difference between a service and a model. A service is a product that is offered by a Cloud Service Provider, and is often a combination of models, data, and other components. A model is the core component of a service, and is often a foundation model, such as an LLM.
Services are often optimized for production use and are often easier to use than models, via a graphical user interface. However, services are not always available for free, and may require a subscription or payment to use, in exchange to leverage service owner’s equipment and resources, optimizing expenses and scaling easily. An example of service is [Azure OpenAI service](https://learn.microsoft.com/azure/ai-services/openai/overview), which offers a pay-as-you-go rate plan, meaning users are charged proportionally to how much they use the service Also, Azure OpenAI service offers enterprise-grade security and responsible AI framework on top of the models' capabilities.
Models are just the Neural Network, with the parameters, weights, and others. Allowing companies to run locally, however, would need to buy equipment, build structure to scale and buy a license or use an open-source model. A model like LLaMA is available to be used, requiring computational power to run the model.
-
-## How to test and iterate with different models to understand performance on Azure
+
+## How to test and iterate with different models to understand performance on Azure
Once our team has explored the current LLMs landscape and identified some good candidates for their scenarios, the next step is testing them on their data and on their workload. This is an iterative process, done by experiments and measures.
-Most of the models we mentioned in previous paragraphs (OpenAI models, open source models like Llama2, and Hugging Face transformers) are available in the [Foundation Models](https://learn.microsoft.com/en-us/azure/machine-learning/concept-foundation-models) catalog in [Azure Machine Learning studio](https://ml.azure.com/).
+Most of the models we mentioned in previous paragraphs (OpenAI models, open source models like Llama2, and Hugging Face transformers) are available in the [Foundation Models](https://learn.microsoft.com/azure/machine-learning/concept-foundation-models) catalog in [Azure Machine Learning studio](https://ml.azure.com/).
+
+[Azure Machine Learning](https://azure.microsoft.com/products/machine-learning/) is a Cloud Service designed for data scientists and ML engineers to manage the whole ML lifecycle (train, test, deploy and handle MLOps) in a single platform. The Machine Learning studio offers a graphical user interface to this service and enables the user to:
-[Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning/) is a Cloud Service designed for data scientists and ML engineers to manage the whole ML lifecycle (train, test, deploy and handle MLOps) in a single platform. The Machine Learning studio offers a graphical user interface to this service and enables the user to:
-- Find the Foundation Model of interest in the catalog, filtering by task, license, or name. It’s also possible to import new models that are not yet included in the catalog.
-- Review the model card, including a detailed description and code samples, and test it with the Sample Inference widget, by providing a sample prompt to test the result.
+- Find the Foundation Model of interest in the catalog, filtering by task, license, or name. It’s also possible to import new models that are not yet included in the catalog.
+- Review the model card, including a detailed description and code samples, and test it with the Sample Inference widget, by providing a sample prompt to test the result.
![Model card](./images/Llama1.png)
-
-- Evaluate model performance with objective evaluation metrics on a specific workload and a specific set of data provided in input.
+
+- Evaluate model performance with objective evaluation metrics on a specific workload and a specific set of data provided in input.
![Model evaluation](./images/Llama2.png)
-
-- Fine-tune the model on custom training data to improve model performance in a specific workload, leveraging the experimentation and tracking capabilities of Azure Machine Learning.
+
+- Fine-tune the model on custom training data to improve model performance in a specific workload, leveraging the experimentation and tracking capabilities of Azure Machine Learning.
![Model fine-tuning](./images/Llama3.png)
-
-- Deploy the original pre-trained model or the fine-tuned version to a remote real time inference or batch endpoint, to enable applications to consume it.
+
+- Deploy the original pre-trained model or the fine-tuned version to a remote real time inference or batch endpoint, to enable applications to consume it.
![Model deployment](./images/Llama4.png)
## Improving LLM results
+
We’ve explored with our startup team different kinds of LLMs and a Cloud Platform (Azure Machine Learning) enabling us to compare different models, evaluate them on test data, improve performance and deploy them on inference endpoints.
-But when shall they consider fine-tuning a model rather than using a pre-trained one? Are there other approaches to improve model performance on specific workloads?
-
+But when should they consider fine-tuning a model rather than using a pre-trained one? Are there other approaches to improve model performance on specific workloads?
There are several approaches a business can use to get the results they need from an LLM, you can select different types of models with different degrees of training
@@ -141,9 +143,8 @@ deploy an LLM in production, with different levels of complexity, cost, and qual
- **Fine-tuned model**. Here, you trained the model further on your own data which leads to the model being more exact and responsive to your needs but might be costly.
-
![LLMs deployment](./images/Deploy.png)
-
+
Img source: [Four Ways that Enterprises Deploy LLMs | Fiddler AI Blog](https://www.fiddler.ai/blog/four-ways-that-enterprises-deploy-llms)
### Prompt Engineering with Context
@@ -156,7 +157,8 @@ Prompt engineering with context is the most cost-effective approach to kick-off
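As a sketch of what prompt engineering with context looks like in practice, the snippet below assembles a prompt from company-specific facts and a worked example before appending the user's question. All facts, names, and the prompt layout are invented for illustration:

```python
# Company-specific context and a few-shot example, prepended to the prompt
# so the model can answer without any fine-tuning. Everything here is made up.
context = (
    "Our startup sells an AI-powered study planner for students.\n"
    "The free tier allows 3 study plans per month.\n"
)
examples = (
    "Q: How many plans do free users get?\n"
    "A: Free users can create 3 study plans per month.\n"
)
question = "What does your product do?"

prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n"
    f"Examples:\n{examples}\n"
    f"Q: {question}\nA:"
)
print(prompt)
```

The only cost here is the extra tokens in each request, which is why this is the cheapest way to specialize a model's behavior.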
### Retrieval Augmented Generation (RAG)
LLMs have the limitation that they can use only the data that has been used during their training to generate an answer. This means that they don’t know anything about the facts that happened after their training process, and they cannot access non-public information (like company data).
-This can be overcome through RAG, a technique that augments prompt with external data in the form of chunks of documents, considering prompt length limits. This is supported by Vector database tools (like [Azure Vector Search](https://learn.microsoft.com/en-us/azure/search/vector-search-overview)) that retrieve the useful chunks from varied pre-defined data sources and add them to the prompt Context.
+This can be overcome through RAG, a technique that augments the prompt with external data in the form of chunks of documents, while respecting prompt length limits. This is supported by vector database tools (like [Azure Vector Search](https://learn.microsoft.com/azure/search/vector-search-overview)) that retrieve the useful chunks from varied pre-defined data sources and add them to the prompt context.
+
This technique is very helpful when a business doesn’t have enough data, enough time, or resources to fine-tune an LLM, but still wishes to improve performance on a specific workload and reduce risks of hallucinations, i.e., mystification of reality or harmful content.
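The retrieval step can be sketched with a toy relevance scorer. Real systems replace the word-overlap score below with embedding similarity backed by a vector database; the document chunks and question here are invented:

```python
import re

def score(chunk: str, question: str) -> int:
    # Crude relevance score: count the words a chunk shares with the question.
    def words(s):
        return set(re.findall(r"[a-z]+", s.lower()))
    return len(words(chunk) & words(question))

chunks = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The cafeteria opens at 8 am on weekdays.",
    "A refund is issued to the original payment method.",
]
question = "How do I get a refund for my purchase?"

# Keep the two most relevant chunks and splice them into the prompt as context.
top_chunks = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:2]
prompt = "Context:\n" + "\n".join(top_chunks) + f"\n\nQuestion: {question}\nAnswer:"
print(prompt)
```

Because the model is told to answer from the retrieved context, answers stay grounded in the company's own documents, which is what reduces hallucinations.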
### Fine-tuned model
@@ -164,18 +166,19 @@ This technique is very helpful when a business doesn’t have enough data, enoug
Fine-tuning is a process that leverages transfer learning to ‘adapt’ the model to a downstream task or to solve a specific problem. Differently from few-shot learning and RAG, it results in a new model being generated, with updated weights and biases. It requires a set of training examples consisting of a single input (the prompt) and its associated output (the completion).
This would be the preferred approach if:
-- **Using fine-tuned models**. A business would like to use fine-tuned less capable models (like embedding models) rather than high performance models, resulting in a more cost effective and fast solution.
+- **Using fine-tuned models**. A business would like to use fine-tuned, less capable models (like embedding models) rather than high-performance models, resulting in a more cost-effective and fast solution.
-- **Considering latency**. Latency is important for a specific use-case, so it’s not possible to use very long prompts or the number of examples that should be learnt from the model doesn’t fit with the prompt length limit.
+- **Considering latency**. Latency is important for a specific use case, so it's not possible to use very long prompts, or the number of examples the model should learn from doesn't fit within the prompt length limit.
-- **Staying up to date**. A business has a lot of high-quality data and ground truth labels and the resources required to maintain this data up to date over time.
+- **Staying up to date**. A business has a lot of high-quality data and ground truth labels, and the resources required to keep this data up to date over time.
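The prompt/completion pairs that fine-tuning requires are commonly serialized as JSON Lines (one JSON object per line). The field names and examples below are illustrative assumptions; the exact schema varies by service:

```python
import json

# Toy fine-tuning dataset: each example pairs an input prompt with the
# completion we want the tuned model to produce.
examples = [
    {"prompt": "Summarize: The meeting moved to Friday.",
     "completion": "Meeting rescheduled to Friday."},
    {"prompt": "Summarize: Sales rose 10% in Q2.",
     "completion": "Q2 sales up 10%."},
]

# Serialize as JSON Lines: one object per line, ready to upload as training data.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```

In practice you would need hundreds or thousands of such examples before fine-tuning yields a measurable improvement over prompt engineering.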
### Trained model
+
Training an LLM from scratch is without a doubt the most difficult and the most complex approach to adopt, requiring massive amounts of data, skilled resources, and appropriate computational power. This option should be considered only in a scenario where a business has a domain-specific use case and a large amount of domain-centric data.
## Knowledge check
-What could be a good approach to improve LLM completion results?
+What could be a good approach to improve LLM completion results?
1. Prompt engineering with context
1. RAG
@@ -185,14 +188,10 @@ A:3, if you have the time and resources and high quality data, fine-tuning is th
## 🚀 Challenge
-Read up more on how you can [use RAG](https://learn.microsoft.com/en-us/azure/search/retrieval-augmented-generation-overview) for your business.
+Read up more on how you can [use RAG](https://learn.microsoft.com/azure/search/retrieval-augmented-generation-overview) for your business.
-## Great Work, Continue Your Learning!
-
-
-Want to learn more about different Generative AI concepts? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.
+## Great Work, Continue Your Learning
+Want to learn more about different Generative AI concepts? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
Head over to the Lesson 3 where we will look at how to [build with Generative AI Responsibly](/03-using-generative-ai-responsibly/README.MD)!
-
-
diff --git a/03-using-generative-ai-responsibly/README.MD b/03-using-generative-ai-responsibly/README.MD
index 060a66e00..c4882c061 100644
--- a/03-using-generative-ai-responsibly/README.MD
+++ b/03-using-generative-ai-responsibly/README.MD
@@ -1,126 +1,116 @@
-# Using Generative AI Responsibly
+# Using Generative AI Responsibly
-[![Using Generative AI Responsibly ](./images/genai_course_3[77].png)]()
+[![Using Generative AI Responsibly](./images/genai_course_3[77].png)]()
-**Video Coming Soon**
+> **Video Coming Soon**
-It's easy to be fascinated with AI and generative AI in particular, but you need to consider how you would use it responsibly. You need to consider things like how to ensure the output is fair, non-harmful and more. This chapter aims to provide you with mentioned context, what to consider, and how to take active steps to improve your AI usage.
+It's easy to be fascinated with AI and generative AI in particular, but you need to consider how you would use it responsibly. You need to consider things like how to ensure the output is fair, non-harmful, and more. This chapter aims to provide you with that context, what to consider, and how to take active steps to improve your AI usage.
-## Introduction
+## Introduction
This lesson will cover:
+
- Why you should prioritize Responsible AI when building Generative AI applications.
- Core principles of Responsible AI and how they relate to Generative AI.
-- How to put these Responsible AI principles into practice through strategy and tooling.
+- How to put these Responsible AI principles into practice through strategy and tooling.
-## Learning Goals
+## Learning Goals
After completing this lesson you will know:
-- The importance of Responsible AI when building Generative AI applications.
-- When to think and apply the core principles of Responsible AI when building Generative AI applications.
-- What tools and strategies are available to you to put the concept of Responsible AI into practice.
+- The importance of Responsible AI when building Generative AI applications.
+- When to think about and apply the core principles of Responsible AI when building Generative AI applications.
+- What tools and strategies are available to you to put the concept of Responsible AI into practice.
-## Responsible AI Principles
+## Responsible AI Principles
-The excitement of Generative AI has never been higher. This excitement has brought a lot of new developers, attention, and funding to this space. While this is very positive for anyone looking to build products and companies using Generative AI, it is also important we proceed responsibly.
+The excitement of Generative AI has never been higher. This excitement has brought a lot of new developers, attention, and funding to this space. While this is very positive for anyone looking to build products and companies using Generative AI, it is also important we proceed responsibly.
Throughout this course, we are focusing on building our startup and our AI education product. We’ll use the principles of Responsible AI: Fairness, Inclusiveness, Reliability/Safety, Security & Privacy, Transparency and Accountability. With these principles, we will explore how they relate to our use of Generative AI in our products.
+## Why Should You Prioritize Responsible AI
-## Why Should You Prioritise Responsible AI
-
-When building a product, taking a human-centric approach by keeping your user's best interest in mind leads to the best results.
+When building a product, taking a human-centric approach by keeping your user's best interest in mind leads to the best results.
-The uniqueness of Generative AI is its power to create helpful answers, information, guidance, and content for users. This can be done without many manual steps which can lead to very impressive results. Without proper planning and strategies, it can also unfortunately lead to some harmful results for your users, your product, and society as a whole.
+The uniqueness of Generative AI is its power to create helpful answers, information, guidance, and content for users. This can be done without many manual steps which can lead to very impressive results. Without proper planning and strategies, it can also unfortunately lead to some harmful results for your users, your product, and society as a whole.
-Let's look at some (but not all) of these potentially harmful results:
+Let's look at some (but not all) of these potentially harmful results:
-### Hallucinations
+### Hallucinations
-Hallucinations are a term used to describe when an LLM produces content that is either completely nonsensical or something we know is factually wrong based on other sources of information.
+Hallucinations are a term used to describe when an LLM produces content that is either completely nonsensical or something we know is factually wrong based on other sources of information.
Let's say, for example, that we build a feature for our startup that allows students to ask a model historical questions. A student asks the question `Who was the sole survivor of Titanic?`
The model produces a response like the one below:
+![Prompt saying "Who was the sole survivor of the Titanic"](/03-using-generative-ai-responsibly/images/2135-ChatGPT(1)_11zon.webp)
-![](/03-using-generative-ai-responsibly/images/2135-ChatGPT(1)_11zon.webp)
-
-*(Source: https://flyingbisons.com)*
+> *(Source: [Flying bisons](https://flyingbisons.com))*
This is a very confident and thorough answer. Unfortunately, it is incorrect. Even with a minimal amount of research, one would discover there was more than one survivor of the Titanic. For a student who is just starting to research this topic, this answer can be persuasive enough to not be questioned and treated as fact. The consequences of this can lead to the AI system being seen as unreliable and negatively impact the reputation of our startup.
-With each iteration of any given LLM, we have seen performance improvements around minimizing hallucinations. Even with this improvement, we as application builders and users still need to remain aware of these limitations.
+With each iteration of any given LLM, we have seen performance improvements around minimizing hallucinations. Even with this improvement, we as application builders and users still need to remain aware of these limitations.
+
+### Harmful Content
-### Harmful Content
+In the earlier section, we covered what happens when an LLM produces incorrect or nonsensical responses. Another risk we need to be aware of is when a model responds with harmful content.
-We covered in the earlier section when an LLM produces incorrect or nonsensical responses. Another risk we need to be aware of is when a model responds with harmful content.
+Harmful content can be defined as:
-Harmful content can be defined as:
-- Providing instructions or encouraging self-harm or harm to certain groups
-- Hateful or demeaning content
-- Guiding planning any type of attack or violent acts
-- Providing instructions on how to find illegal content or commit illegal acts
-- Displaying sexually explicit content
+- Providing instructions or encouraging self-harm or harm to certain groups.
+- Hateful or demeaning content.
+- Guiding the planning of any type of attack or violent act.
+- Providing instructions on how to find illegal content or commit illegal acts.
+- Displaying sexually explicit content.
-For our startup, we want to make sure we have the right tools and strategies in place to prevent this type of content from being seen by students.
+For our startup, we want to make sure we have the right tools and strategies in place to prevent this type of content from being seen by students.
-### Lack of Fairness
+### Lack of Fairness
Fairness is defined as “ensuring that an AI system is free from bias and discrimination and that it treats everyone fairly and equally.” In the world of Generative AI, we want to ensure that exclusionary worldviews of marginalized groups are not reinforced by the model’s output.
These types of outputs are not only destructive to building positive product experiences for our users, but they also cause further societal harm. As application builders, we should always keep a wide and diverse user base in mind when building solutions with Generative AI.
-## How to Use Generative AI Responsibly
+## How to Use Generative AI Responsibly
-Now that we have identified the importance of Responsible Generative AI, let's look at 4 steps we can take to build our AI solutions responsibly:
+Now that we have identified the importance of Responsible Generative AI, let's look at 4 steps we can take to build our AI solutions responsibly:
![Mitigate Cycle](./images/mitigate-cycle.png)
-### Measure Potential Harms
-In software testing, we test the expected actions of a user on an application. Similarly, testing a diverse set of prompts users are most likely going to use is a good way to measure potential harm.
-
-Since our startup is building an education product, it would be good to prepare a list of education-related prompts. This could be to cover a certain subject, historical facts, and prompts about student life.
-
+### Measure Potential Harms
-### Mitigate Potential Harms
-It is now time to find ways where we can prevent or limit the potential harm caused by the model and its responses. We can look at this in 4 different layers:
+In software testing, we test the expected actions of a user on an application. Similarly, testing a diverse set of prompts that users are most likely to use is a good way to measure potential harm.
-![Mitigation Layers](./images/mitigation-layers.png)
+Since our startup is building an education product, it would be good to prepare a list of education-related prompts. These could cover a certain subject, historical facts, and questions about student life.
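The testing idea above could be sketched as a small harness. This is a minimal sketch under assumptions: `generate` is a stub standing in for a real model call, and `looks_harmful` is a naive keyword placeholder for a proper content classifier.

```python
# Minimal harm-measurement harness sketch.
# `generate` is a stub for a real model call; `looks_harmful` is a naive
# placeholder for a proper content safety classifier.

EDUCATION_TEST_PROMPTS = [
    "Who was the sole survivor of the Titanic?",
    "Summarize the causes of World War I.",
    "How should I handle stress before an exam?",
]

BLOCKLIST = ["self-harm", "attack", "explicit"]

def generate(prompt: str) -> str:
    # Placeholder: call your model or API here.
    return f"Model response to: {prompt}"

def looks_harmful(response: str) -> bool:
    # Naive keyword check; a real system would use a content safety service.
    return any(term in response.lower() for term in BLOCKLIST)

def measure_harms(prompts):
    # Return the prompts whose responses were flagged for review.
    return [p for p in prompts if looks_harmful(generate(p))]

print(measure_harms(EDUCATION_TEST_PROMPTS))  # [] for these benign prompts
```

A real harness would log severity per category and track results across model versions, but even this shape makes the measurement step repeatable.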
-**Model**
+### Mitigate Potential Harms
-Choosing the right model for the right use case. Larger and more complex models like GPT-4 can cause more of a risk of harmful content when applied to smaller and more specific use cases. Using your training data to fine-tune also reduces the risk of harmful content.
+It is now time to find ways where we can prevent or limit the potential harm caused by the model and its responses. We can look at this in 4 different layers:
-**Safety System**
+![Mitigation Layers](./images/mitigation-layers.png)
-A safety system is a set of tools and configurations on the platform serving the model that help mitigate harm. An example of this is the content filtering system on the Azure OpenAI service. Systems should also detect jailbreak attacks and unwanted activity like requests from bots.
+- **Model**. Choose the right model for the right use case. Larger and more complex models like GPT-4 can pose a greater risk of producing harmful content when applied to smaller and more specific use cases. Using your training data to fine-tune also reduces the risk of harmful content.
-**Metaprompt**
+- **Safety System**. A safety system is a set of tools and configurations on the platform serving the model that help mitigate harm. An example of this is the content filtering system on the Azure OpenAI service. Systems should also detect jailbreak attacks and unwanted activity like requests from bots.
-Metaprompts and grounding are ways we can direct or limit the model based on certain behaviors and information. This could be using system inputs to define certain limits of the model. In addition, providing outputs that are more relevant to scope or domain of system.
+- **Metaprompt**. Metaprompts and grounding are ways we can direct or limit the model based on certain behaviors and information. This could mean using system inputs to define certain limits of the model, in addition to providing outputs that are more relevant to the scope or domain of the system.
It can also mean using techniques like Retrieval Augmented Generation (RAG) to have the model only pull information from a selection of trusted sources. There is a lesson later in this course on [building search applications](../08-building-search-applications/README.md).
-**User Experience**
-
-The final layer is where the user interacts directly with the model through our application’s interface in some way. In this way we can design the UI/UX to limit the user on the types of inputs they can send to the model as well as text or images displayed to the user. When deploying the AI application, we also must be transparent about what our Generative AI application can and can’t do.
+- **User Experience**. The final layer is where the user interacts directly with the model through our application’s interface in some way. In this way we can design the UI/UX to limit the user on the types of inputs they can send to the model as well as text or images displayed to the user. When deploying the AI application, we also must be transparent about what our Generative AI application can and can’t do.
We have an entire lesson dedicated to [Designing UX for AI Applications](../12-designing-ux-for-ai-applications/README.md)
-**Evaluate model**
-
-Working with LLMs can be challenging because we don’t always have control over the data the model was trained on. Regardless, we should always evaluate the model’s performance and outputs. It’s still important to measure the model’s accuracy, similarity, groundedness, and relevance of the output. This helps provide transparency and trust to stakeholders and users.
+- **Evaluate model**. Working with LLMs can be challenging because we don’t always have control over the data the model was trained on. Regardless, we should always evaluate the model’s performance and outputs. It’s still important to measure the model’s accuracy, similarity, groundedness, and relevance of the output. This helps provide transparency and trust to stakeholders and users.
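One lightweight way to start evaluating is an offline loop over a labeled test set. The sketch below is illustrative only: `model_answer` is a stub for a real model call, and a simple string-similarity ratio stands in for richer metrics like groundedness or relevance.

```python
from difflib import SequenceMatcher

# Offline evaluation sketch: score model outputs against reference answers.
# `model_answer` is a stub; similarity stands in for richer metrics.

TEST_SET = [
    ("What year did the Titanic sink?", "The Titanic sank in 1912."),
]

def model_answer(question: str) -> str:
    # Placeholder: call your model here.
    return "The Titanic sank in 1912."

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1]; 1.0 means the strings match exactly (case-insensitive).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def evaluate(test_set, threshold: float = 0.8):
    scores = [similarity(model_answer(q), ref) for q, ref in test_set]
    passed = sum(s >= threshold for s in scores)
    return passed, len(scores)

print(evaluate(TEST_SET))  # (1, 1): the stubbed answer matches the reference
```

Reporting pass rates like this over time gives stakeholders a concrete, repeatable signal rather than anecdotes.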
### Operate a Responsible Generative AI solution
-Building an operational practice around your AI applications is the final stage. This includes partnering with other parts of our startup like Legal and Security to ensure we are compliant with all regulatory policies. Before launching, we also want to build plans around delivery, handling incidents, and rollback to prevent any harm to our users from growing.
+Building an operational practice around your AI applications is the final stage. This includes partnering with other parts of our startup like Legal and Security to ensure we are compliant with all regulatory policies. Before launching, we also want to build plans around delivery, handling incidents, and rollback to prevent any harm to our users from growing.
+## Tools
-## Tools
-
-While the work of developing Responsible AI solutions may seem like a lot, it is work well worth the effort. As the area of Generative AI grows, more tooling to help developers efficiently integrate responsibility into their workflows will mature. For example, the [Azure AI Content Safety](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview ) can help detect harmful content and images via an API request.
+While the work of developing Responsible AI solutions may seem like a lot, it is well worth the effort. As the area of Generative AI grows, more tooling to help developers efficiently integrate responsibility into their workflows will mature. For example, the [Azure AI Content Safety](https://learn.microsoft.com/azure/ai-services/content-safety/overview) service can help detect harmful content and images via an API request.
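As a rough illustration of how such a result might be used: assuming a response shaped like the text-analysis result returned by a content safety service (a list of category/severity pairs; check the official service documentation for the exact schema), you could apply a severity threshold before showing content to students.

```python
# Sketch: applying a severity threshold to a content-safety result.
# The response shape below is an assumption modeled loosely on Azure AI
# Content Safety's text-analysis API; consult the official docs for the
# real schema and severity scale.

sample_response = {
    "categoriesAnalysis": [
        {"category": "Hate", "severity": 0},
        {"category": "SelfHarm", "severity": 0},
        {"category": "Sexual", "severity": 0},
        {"category": "Violence", "severity": 2},
    ]
}

def is_safe(response: dict, max_severity: int = 1) -> bool:
    # Allow the content only if every category is at or below the threshold.
    return all(item["severity"] <= max_severity
               for item in response["categoriesAnalysis"])

print(is_safe(sample_response))  # False: Violence severity 2 exceeds the threshold
```

The right threshold depends on your audience; an education product for students would likely set it stricter than a general-purpose app.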
## Knowledge check
@@ -134,13 +124,10 @@ A: 2 and 3 is correct. Responsible AI helps you consider how to mitigate harmful
## 🚀 Challenge
-Read up on [Azure AI Content Saftey](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview ) and see what you can adopt for your usage.
-
-## Great Work, Continue Your Learning!
+Read up on [Azure AI Content Safety](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview) and see what you can adopt for your usage.
+## Great Work, Continue Your Learning
-Want to learn more about how to build with Generative AI responsibly? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.
-
+Want to learn more about how to build with Generative AI responsibly? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
Head over to Lesson 4 where we will look at [Prompt Engineering Fundamentals](/4-prompt-engineering-fundamentals/README.md)!
-
diff --git a/04-prompt-engineering-fundamentals/README.md b/04-prompt-engineering-fundamentals/README.md
index 14cefac04..b5ee9fa1a 100644
--- a/04-prompt-engineering-fundamentals/README.md
+++ b/04-prompt-engineering-fundamentals/README.md
@@ -1,14 +1,13 @@
-# Prompt Engineering Fundamentals
+# Prompt Engineering Fundamentals
-[![Prompt Engineering Fundamentals ](./img/04-lesson-banner.png)](https://youtu.be/r2ItK3UMVTk)
+[![Prompt Engineering Fundamentals](./img/04-lesson-banner.png)](https://youtu.be/r2ItK3UMVTk)
How you write your prompt to the LLM matters; a carefully crafted prompt can achieve a better result than one that isn't. But what exactly are these concepts of prompt and prompt engineering, and how do I improve what I send to the LLM? Questions like these are what this chapter and the upcoming one look to answer.
+_Generative AI_ is capable of creating new content (e.g., text, images, audio, code etc.) in response to user requests. It achieves this using _Large Language Models_ (LLMs) like OpenAI's GPT ("Generative Pre-trained Transformer") series that are trained for using natural language and code.
-_Generative AI_ is capable of creating new content (e.g., text, images, audio, code etc.) in response to user requests. It achieves this using _Large Language Models_ (LLMs) like OpenAI's GPT ("Generative Pre-trained Transformer") series that are trained for using natural language and code.
-
-Users can now interact with these models using familiar pardigms like chat, without needing any technical expertise or training. The models are _prompt-based_ - users send a text input (prompt) and get back the AI response (completion). They can then "chat with the AI" iteratively, in multi-turn conversations, refining their prompt till the response matches their expectations.
+Users can now interact with these models using familiar paradigms like chat, without needing any technical expertise or training. The models are _prompt-based_ - users send a text input (prompt) and get back the AI response (completion). They can then "chat with the AI" iteratively, in multi-turn conversations, refining their prompt till the response matches their expectations.
"Prompts" now become the primary _programming interface_ for generative AI apps, telling the models what to do and influencing the quality of returned responses. "Prompt Engineering" is a fast-growing field of study that focuses on the _design and optimization_ of prompts to deliver consistent and quality responses at scale.
@@ -43,10 +42,10 @@ The notebook comes with _starter_ exercises - but you are encouraged to add your
Now, let's talk about how _this topic_ relates to our startup mission to [bring AI innovation to education](https://educationblog.microsoft.com/2023/06/collaborating-to-bring-ai-innovation-to-education). We want to build AI-powered applications of _personalized learning_ - so let's think about how different users of our application might "design" prompts:
- - **Administrators** might ask the AI to _analyze curriculum data to identify gaps in coverage_. The AI can summarize results or visualize them with code.
- - **Educators** might ask the AI to _generate a lesson plan for a target audience and topic_. The AI can build the personalized plan in a specified format.
- - **Students** might ask the AI to _tutor them in a difficult subject_. The AI can now guide students with lessons, hints & examples tailored to their level.
-
+- **Administrators** might ask the AI to _analyze curriculum data to identify gaps in coverage_. The AI can summarize results or visualize them with code.
+- **Educators** might ask the AI to _generate a lesson plan for a target audience and topic_. The AI can build the personalized plan in a specified format.
+- **Students** might ask the AI to _tutor them in a difficult subject_. The AI can now guide students with lessons, hints & examples tailored to their level.
+
That's just the tip of the iceberg. Check out [Prompts For Education](https://github.com/microsoft/prompts-for-edu/tree/main) - an open-source prompts library curated by education experts - to get a broader sense of the possibilities! _Try running some of those prompts in the sandbox or using the OpenAI Playground to see what happens!_
-
-## Code Challenge
+## Assignment
Congratulations! You made it to the end of the lesson! It's time to put some of those concepts and techniques to the test with real examples!
-For our code challenge, we'll be using a Jupyter Notebook with exercises you can complete interactively. You can also extend the Notebook with your own Markdown and Code cells to explore ideas and techniques on your own.
+For our assignment, we'll be using a Jupyter Notebook with exercises you can complete interactively. You can also extend the Notebook with your own Markdown and Code cells to explore ideas and techniques on your own.
+
+### To get started, fork the repo, then
-**To get started, fork the repo, then:**
- (Recommended) Launch GitHub Codespaces
-- (Alternatively) Clone the repo to your local device and use it with Docker Desktop
+- (Alternatively) Clone the repo to your local device and use it with Docker Desktop
- (Alternatively) Open the Notebook with your preferred Notebook runtime environment.
-**Next, configure your environment variables**
+### Next, configure your environment variables
+
- Copy the `.env.copy` file in the repo root to `.env` and fill in the `OPENAI_API_KEY` value. You can find your API Key in your [OpenAI Dashboard](https://beta.openai.com/account/api-keys).
-**Next, open the Jupyter Notebook**
+### Next, open the Jupyter Notebook
+
- Select the runtime kernel. If using options 1 or 2, simply select the default Python 3.10.x kernel provided by the dev container.
You're all set to run the exercises. Note that there are no _right and wrong_ answers here - just exploring options by trial-and-error and building intuition for what works for a given model and application domain.
_For this reason there are no Code Solution segments in this lesson. Instead, the Notebook will have Markdown cells titled "My Solution:" that show one example output for reference._
-
-
-## Great Work! Continue Your Learning
-
+## Great Work! Continue Your Learning
Want to learn more about different Prompt Engineering concepts? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
-
Head over to Lesson 5 where we will look at [advanced prompting techniques](../05-advanced-prompts/README.md)!
diff --git a/05-advanced-prompts/README.md b/05-advanced-prompts/README.md
index ac7310b9e..a86bc502c 100644
--- a/05-advanced-prompts/README.md
+++ b/05-advanced-prompts/README.md
@@ -1,12 +1,12 @@
# Creating Advanced prompts
+[![Creating Advanced Prompts](./images/05-lesson-banner.png)](https://youtu.be/32GBH6BTWZQ)
-[![Creating Advanced Prompts](./images/05-lesson-banner.png)](https://youtu.be/32GBH6BTWZQ)
+Let's recap some learnings from the previous chapter:
-Let's recap some learnings from the previous chapter:
-> Prompt _engineering_ is the process by which we **guide the model towards more relevant responses** by providing more useful instructions or context.
+> Prompt _engineering_ is the process by which we **guide the model towards more relevant responses** by providing more useful instructions or context.
-There's also two steps to writing prompts, constructing the prompt, by providing relevant context and the second part is *optimization*, how to gradually improve the prompt.
+There are also two steps to writing prompts: first, constructing the prompt by providing relevant context, and second, _optimization_, gradually improving the prompt.
At this point, we have some basic understanding of how to write prompts, but we need to go deeper. In this chapter, you will go from trying out various prompts to understanding why one prompt is better than another. You will learn how to construct prompts following some basic techniques that can be applied to any LLM.
@@ -54,7 +54,7 @@ So far, we've seen a simple prompt example, but generative AI is capable of much
### Techniques for prompting
-First, we need to understand that prompting is an *emergent* property of an LLM meaning that this is not a feature that is built into the model but rather something we discover as we use the model.
+First, we need to understand that prompting is an _emergent_ property of an LLM meaning that this is not a feature that is built into the model but rather something we discover as we use the model.
There are some basic techniques that we can use to prompt an LLM. Let's explore them.
@@ -83,9 +83,9 @@ LLM answers with 5, which is incorrect. Correct answer is 1 apple, given the cal
So how can we teach the LLM to do this correctly?
-Let's try chain-of-thought. Applying chain-of-thought means:
+Let's try chain-of-thought. Applying chain-of-thought means:
-1. Give the LLM a similar example.
+1. Give the LLM a similar example.
1. Show the calculation, and how to calculate it correctly.
1. Provide the original prompt.
@@ -104,73 +104,73 @@ As you can see chain-of-thought is a very powerful technique.
### Generated knowledge
-Many times when you want to construct a prompt, you want to do so using your own company's data. You want part of the prompt to be from the company and the other part should be the actual prompt you're interested in.
+Many times when you want to construct a prompt, you want to do so using your own company's data. You want part of the prompt to be from the company and the other part should be the actual prompt you're interested in.
As an example, this is what your prompt then can look like if you're in the insurance business:
-```text
-{{company}}: {{company_name}}
-{{products}}:
-{{products_list}}
-Please suggest an insurance given the following budget and requirements:
-Budget: {{budget}}
-Requirements: {{requirements}}
-```
+ ```text
+ {{company}}: {{company_name}}
+ {{products}}:
+ {{products_list}}
+ Please suggest an insurance given the following budget and requirements:
+ Budget: {{budget}}
+ Requirements: {{requirements}}
+ ```
Above, you see how the prompt is constructed using a template. In the template there are a number of variables, denoted by `{{variable}}`, that will be replaced with actual values from a company API.
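As a rough sketch of that substitution step (the helper and variable names here are illustrative, not part of any particular API):

```python
import re

# Fill a {{variable}} prompt template with values, e.g. fetched from a company API.
TEMPLATE = (
    "{{company}}: {{company_name}}\n"
    "Budget: {{budget}}\n"
    "Requirements: {{requirements}}"
)

def render(template: str, values: dict) -> str:
    # Replace each {{name}} with its value; leave unknown variables untouched.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(values.get(m.group(1), m.group(0))),
        template,
    )

prompt = render(TEMPLATE, {
    "company": "Insurance company",
    "company_name": "ACME Insurance",
    "budget": "$1000",
    "requirements": "Car, Home",
})
print(prompt)
```

In production you might prefer a real templating library, but the idea is the same: the template stays fixed while the values vary per request.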
Here's an example of what the prompt could look like once the variables have been replaced by content from your company:
-```text
-Insurance company: ACME Insurance
-Insurance products (cost per month):
-- Car, cheap, 500 USD
-- Car, expensive, 1100 USD
-- Home, cheap, 600 USD
-- Home, expensive, 1200 USD
-- Life, cheap, 100 USD
-
-Please suggest an insurance given the following budget and requirements:
-Budget: $1000
-Requirements: Car, Home
-```
+ ```text
+ Insurance company: ACME Insurance
+ Insurance products (cost per month):
+ - Car, cheap, 500 USD
+ - Car, expensive, 1100 USD
+ - Home, cheap, 600 USD
+ - Home, expensive, 1200 USD
+ - Life, cheap, 100 USD
+
+ Please suggest an insurance given the following budget and requirements:
+ Budget: $1000
+ Requirements: Car, Home
+ ```
Running this prompt through an LLM will produce a response like this:
-```text
-, and Life insurance
-
-Given the budget and requirements, we suggest the following insurance package from ACME Insurance:
-- Car, cheap, 500 USD
-- Home, cheap, 600 USD
-- Life, cheap, 100 USD
-Total cost: $1,200 USD
-```
-
-As you can see, it also suggests the Life insurance, which it shouldn't. This result is an indication that we need to optimize the prompt by changing the prompt to be clearer on what it can allow. After som *trial and error*, we arrive at the following prompt:
-
-```text
-Insurance company: ACME Insurance
-Insurance products (cost per month):
-- type: Car, cheap, cost: 500 USD
-- type: Car, expensive, cost: 1100 USD
-- type: Home, cheap, cost: 600 USD
-- type: Home, expensive, cost: 1200 USD
-- type: Life, cheap, cost: 100 USD
-
-Please suggest an insurance given the following budget and requirements:
-Budget: $1000 restrict choice to types: Car, Home
-```
+ ```output
+ , and Life insurance
+
+ Given the budget and requirements, we suggest the following insurance package from ACME Insurance:
+ - Car, cheap, 500 USD
+ - Home, cheap, 600 USD
+ - Life, cheap, 100 USD
+ Total cost: $1,200 USD
+ ```
+
+As you can see, it also suggests the Life insurance, which it shouldn't. This result is an indication that we need to optimize the prompt by making it clearer about what it can allow. After some _trial and error_, we arrive at the following prompt:
+
+ ```text
+ Insurance company: ACME Insurance
+ Insurance products (cost per month):
+ - type: Car, cheap, cost: 500 USD
+ - type: Car, expensive, cost: 1100 USD
+ - type: Home, cheap, cost: 600 USD
+ - type: Home, expensive, cost: 1200 USD
+ - type: Life, cheap, cost: 100 USD
+
+ Please suggest an insurance given the following budget and requirements:
+ Budget: $1000 restrict choice to types: Car, Home
+ ```
-Note how adding *type* and *cost* and also using the keyword *restrict* helps the LLM to understand what we want.
+Note how adding _type_ and _cost_ and also using the keyword _restrict_ helps the LLM to understand what we want.
Now we get the following response:
-```text
-Given the budget and requirements, we suggest the Car, Cheap insurance product which costs 500 USD per month.
-```
+ ```output
+ Given the budget and requirements, we suggest the Car, Cheap insurance product which costs 500 USD per month.
+ ```
-The point of this example was to show that even though we're using a basic technique like *generated knowledge*, we still need to optimize the prompt in most cases to get the desired outcome.
+The point of this example was to show that even though we're using a basic technique like _generated knowledge_, we still need to optimize the prompt in most cases to get the desired outcome.
### Least-to-most
@@ -178,7 +178,7 @@ The idea with Least-to-most prompting is to break down a bigger problem into sub
> Prompt: How to perform data science in 5 steps?
-With your AI assistant answering with:
+With your AI assistant answering with:
1. Collect data
1. Clean data
@@ -205,52 +205,52 @@ Here's an example using this technique:
AI Answer:
-```python
-import flask
-app = Flask(__name__)
-
-@app.route('/products')
-def products():
- return 'Products'
-
-@app.route('/customers')
-def customers():
- return 'Customers'
-```
+ ```python
+ import flask
+ app = Flask(__name__)
+
+ @app.route('/products')
+ def products():
+ return 'Products'
+
+ @app.route('/customers')
+ def customers():
+ return 'Customers'
+ ```
> Prompt: suggest 3 improvements of the above code
AI Answer:
-```text
-Sure, here are three suggestions to improve the code:
-
-Import the correct module: The first line should import Flask instead of flask. The correct import statement should be from flask import Flask.
-
-Add a main function: Add a if __name__ == '__main__': block to the end of the file to run the application when the file is executed.
-
-Use f-strings: Use f-strings to format the return statements in the route functions. For example, return 'Products' can be changed to return f'This is the {request.path} page'.
-```
-
-```python
-from flask import Flask, request
-
-app = Flask(__name__)
-
-@app.route('/products')
-def products():
- return f'This is the {request.path} page'
-
-@app.route('/customers')
-def customers():
- return f'This is the {request.path} page'
+ ```text
+ Sure, here are three suggestions to improve the code:
+
+ Import the correct module: The first line should import Flask instead of flask. The correct import statement should be from flask import Flask.
+
+ Add a main function: Add a if __name__ == '__main__': block to the end of the file to run the application when the file is executed.
+
+ Use f-strings: Use f-strings to format the return statements in the route functions. For example, return 'Products' can be changed to return f'This is the {request.path} page'.
+ ```
-if __name__ == '__main__':
- app.run()
-```
+ ```python
+ from flask import Flask, request
+
+ app = Flask(__name__)
+
+ @app.route('/products')
+ def products():
+ return f'This is the {request.path} page'
+
+ @app.route('/customers')
+ def customers():
+ return f'This is the {request.path} page'
+
+ if __name__ == '__main__':
+ app.run()
+ ```
As you can see, the AI improves its first suggested code by critiquing its own first answer.
-
+
### Maieutic prompting
Maieutic prompting is a technique similar to self-refine, but it's more about asking the LLM to explain itself. The goal is to reduce inconsistencies in the LLM's output to ensure it arrives at the correct answer. The workflow to follow is:
@@ -264,43 +264,43 @@ Repeat 2 and 3 until you've gone through all the parts and you're satisfied with
Here's an example prompt:
> prompt: How can I create a crisis plan to mitigate for a pandemic in 5 steps?
-LLM answer:
+LLM answer:
-```text
-1. Identify the risks
-1. Identify the stakeholders
-1. Identify the resources
-1. Identify the actions
-1. Identify the communication channels
-```
+ ```text
+ 1. Identify the risks
+ 1. Identify the stakeholders
+ 1. Identify the resources
+ 1. Identify the actions
+ 1. Identify the communication channels
+ ```
It's identified 5 steps, but can we determine if this is correct? Let's ask the LLM to explain each step:
> prompt: Explain the first step in more detail, what are the risks in detail with a pandemic?
-```text
-The risks are the following:
-
-- Loss of life
-- Loss of business
-- Loss of reputation
-- Loss of customers
-- Loss of employees
-```
+ ```text
+ The risks are the following:
+
+ - Loss of life
+ - Loss of business
+ - Loss of reputation
+ - Loss of customers
+ - Loss of employees
+ ```
At this point, we want to make sure the LLM is correct so we ask it to explain the first risk in more detail and hope it's consistent with the answer above:
> prompt: In a Pandemic, which is the biggest risk and why?
-```text
-The biggest risk is loss of life because it's the most important thing to protect.
-```
+ ```text
+ The biggest risk is loss of life because it's the most important thing to protect.
+ ```
> What are the two biggest risks in a pandemic?
-```text
-The two biggest risks are loss of life and loss of business.
-```
+ ```text
+ The two biggest risks are loss of life and loss of business.
+ ```
At this point, the LLM is consistent and mentions "life" and "business" as the top two risks. We can now continue to the next step and feel fairly confident. However, we shouldn't trust the LLM blindly; we should always verify the output.
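A rough version of this verification can even be automated: check that the follow-up explanations keep mentioning the same key terms. A minimal sketch, where the helper name and sample answers are purely illustrative:

```python
def is_consistent(answers, key_terms):
    """Return True if every key term appears in every answer."""
    return all(
        all(term.lower() in answer.lower() for term in key_terms)
        for answer in answers
    )

# The two follow-up answers from the maieutic prompts above:
answer_1 = "The biggest risk is loss of life because it's the most important thing to protect."
answer_2 = "The two biggest risks are loss of life and loss of business."

print(is_consistent([answer_1, answer_2], ["life"]))      # True: both mention life
print(is_consistent([answer_1, answer_2], ["business"]))  # False: only the second does
```

A real check would of course be fuzzier (synonyms, paraphrases), but even a crude term match can flag an LLM that contradicts itself.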
@@ -310,107 +310,107 @@ LLMs are non deterministic by nature, meaning that you will get different result
> "Generate code for a Python Web API"
-```text
-# Import necessary modules
-import flask
-from flask import request, jsonify
-
-# Create a Flask app
-app = flask.Flask(__name__)
-
-# Create a route for the API
-@app.route('/api', methods=['GET'])
-def api():
- # Get the data from the request
- data = request.args
-
- # Process the data
- result = process_data(data)
-
- # Return the result as JSON
- return jsonify(result)
-
-# Function to process the data
-def process_data(data):
- # Do something with the data
- result = data + 1
-
- # Return the result
- return result
-
-# Run the app
-if __name__ == '__main__':
- app.run()
-
-```
+ ```python
+ # Import necessary modules
+ import flask
+ from flask import request, jsonify
+
+ # Create a Flask app
+ app = flask.Flask(__name__)
+
+ # Create a route for the API
+ @app.route('/api', methods=['GET'])
+ def api():
+ # Get the data from the request
+ data = request.args
+
+ # Process the data
+ result = process_data(data)
+
+ # Return the result as JSON
+ return jsonify(result)
+
+ # Function to process the data
+ def process_data(data):
+ # Do something with the data
+ result = data + 1
+
+ # Return the result
+ return result
+
+ # Run the app
+ if __name__ == '__main__':
+ app.run()
+
+ ```
Running the same prompt again generates a slightly different response:
-```text
-#import necessary packages
-import flask
-from flask import request, jsonify
-
-#create the Flask application
-app = flask.Flask(__name__)
-app.config["DEBUG"] = True
-
-#create a list of books
-books = [
- {'id': 0,
- 'title': 'A Fire Upon the Deep',
- 'author': 'Vernor Vinge',
- 'first_sentence': 'The coldsleep itself was dreamless.',
- 'year_published': '1992'},
- {'id': 1,
- 'title': 'The Ones Who Walk Away From Omelas',
- 'author': 'Ursula K. Le Guin',
- 'first_sentence': 'With a clamor of bells that set the swallows soaring, the Festival of Summer came to the city Omelas, bright-towered by the sea.',
- 'published': '1973'},
- {'id': 2,
- 'title': 'Dhalgren',
- 'author': 'Samuel R. Delany',
- 'first_sentence': 'to wound the autumnal city.',
- 'published': '1975'}
-]
-
-#create an endpoint for the API
-@app.route('/', methods=['GET'])
-def home():
- return '''
Books API
-A prototype API for retrieving books.
'''
-
-#create an endpoint to return all books
-@app.route('/api/v1/resources/books/all', methods=['GET'])
-def api_all():
- return jsonify(books)
-
-#create an endpoint to return a single book
-@app.route('/api/v1/resources/books', methods=['GET'])
-def api_id():
- #check if an ID was provided as part of the URL
- #if ID is provided, assign it to a variable
- #if no ID is provided, display an error
- if 'id' in request.args:
- id = int(request.args['id'])
- else:
- return "Error: No id field provided. Please specify an id."
-
- #create an empty list for our results
- results = []
-
- #loop through the data and match results that fit the requested ID
- #IDs are unique, but other fields might return many results
- for book in books:
- if book['id'] == id:
- results.append(book)
-
- #use the jsonify function from Flask to convert our list of
- #Python dictionaries to the JSON format
- return jsonify(results)
-
-app.run()
-```
+ ```python
+ #import necessary packages
+ import flask
+ from flask import request, jsonify
+
+ #create the Flask application
+ app = flask.Flask(__name__)
+ app.config["DEBUG"] = True
+
+ #create a list of books
+ books = [
+ {'id': 0,
+ 'title': 'A Fire Upon the Deep',
+ 'author': 'Vernor Vinge',
+ 'first_sentence': 'The coldsleep itself was dreamless.',
+ 'year_published': '1992'},
+ {'id': 1,
+ 'title': 'The Ones Who Walk Away From Omelas',
+ 'author': 'Ursula K. Le Guin',
+ 'first_sentence': 'With a clamor of bells that set the swallows soaring, the Festival of Summer came to the city Omelas, bright-towered by the sea.',
+ 'published': '1973'},
+ {'id': 2,
+ 'title': 'Dhalgren',
+ 'author': 'Samuel R. Delany',
+ 'first_sentence': 'to wound the autumnal city.',
+ 'published': '1975'}
+ ]
+
+ #create an endpoint for the API
+ @app.route('/', methods=['GET'])
+ def home():
+ return '''Books API
+ A prototype API for retrieving books.
'''
+
+ #create an endpoint to return all books
+ @app.route('/api/v1/resources/books/all', methods=['GET'])
+ def api_all():
+ return jsonify(books)
+
+ #create an endpoint to return a single book
+ @app.route('/api/v1/resources/books', methods=['GET'])
+ def api_id():
+ #check if an ID was provided as part of the URL
+ #if ID is provided, assign it to a variable
+ #if no ID is provided, display an error
+ if 'id' in request.args:
+ id = int(request.args['id'])
+ else:
+ return "Error: No id field provided. Please specify an id."
+
+ #create an empty list for our results
+ results = []
+
+ #loop through the data and match results that fit the requested ID
+ #IDs are unique, but other fields might return many results
+ for book in books:
+ if book['id'] == id:
+ results.append(book)
+
+ #use the jsonify function from Flask to convert our list of
+ #Python dictionaries to the JSON format
+ return jsonify(results)
+
+ app.run()
+ ```
> So is the varied output a problem?
@@ -424,137 +424,137 @@ Temperature is a value between 0 and 1, where 0 is the most deterministic and 1
> "Generate code for a Python Web API"
-```python
-#import necessary libraries
-import flask
-from flask import request, jsonify
-
-#create an instance of the Flask class
-app = flask.Flask(__name__)
-
-#create an endpoint for the API
-@app.route('/api/v1/endpoint', methods=['GET'])
-def api_endpoint():
- #get the request data
- data = request.get_json()
-
- #process the data
- result = process_data(data)
-
- #return the result
- return jsonify(result)
-
-#function to process the data
-def process_data(data):
- #process the data
- result = {'result': 'success'}
-
- #return the result
- return result
-
-#run the Flask app
-if __name__ == '__main__':
- app.run()
-
-```
+ ```python
+ #import necessary libraries
+ import flask
+ from flask import request, jsonify
+
+ #create an instance of the Flask class
+ app = flask.Flask(__name__)
+
+ #create an endpoint for the API
+ @app.route('/api/v1/endpoint', methods=['GET'])
+ def api_endpoint():
+ #get the request data
+ data = request.get_json()
+
+ #process the data
+ result = process_data(data)
+
+ #return the result
+ return jsonify(result)
+
+ #function to process the data
+ def process_data(data):
+ #process the data
+ result = {'result': 'success'}
+
+ #return the result
+ return result
+
+ #run the Flask app
+ if __name__ == '__main__':
+ app.run()
+
+ ```
Running the prompt again gives us this outcome:
-```python
-#import necessary libraries
-import flask
-from flask import request, jsonify
-
-#create a Flask app
-app = flask.Flask(__name__)
-
-#create a route for the API
-@app.route('/api', methods=['GET'])
-def api_call():
- #get the data from the request
- data = request.get_json()
+ ```python
+ #import necessary libraries
+ import flask
+ from flask import request, jsonify
- #process the data
- result = process_data(data)
+ #create a Flask app
+ app = flask.Flask(__name__)
- #return the result
- return jsonify(result)
-
-#define the process_data function
-def process_data(data):
- #do something with the data
- result = data + 1
+ #create a route for the API
+ @app.route('/api', methods=['GET'])
+ def api_call():
+ #get the data from the request
+ data = request.get_json()
+
+ #process the data
+ result = process_data(data)
+
+ #return the result
+ return jsonify(result)
- #return the result
- return result
-
-#run the app
-if __name__ == '__main__':
- app.run()
-
-```
+ #define the process_data function
+ def process_data(data):
+ #do something with the data
+ result = data + 1
+
+ #return the result
+ return result
+
+ #run the app
+ if __name__ == '__main__':
+ app.run()
+
+ ```
There's only a tiny difference between these two outputs. Let's do the opposite this time and set the temperature to 0.9:
-```python
-# Import necessary libraries
-import flask
-from flask import request, jsonify
-
-# Create a Flask app
-app = flask.Flask(__name__)
-
-# Create a route for the API
-@app.route('/api', methods=['GET'])
-def api_call():
- # Get the data from the request
- data = request.args
-
- # Process the data
- result = process_data(data)
-
- # Return the result
- return jsonify(result)
-
-# Function to process the data
-def process_data(data):
- # Do something with the data
- result = data + 1
-
- # Return the result
- return result
-
-# Run the app
-if __name__ == '__main__':
- app.run()
-
-```
+ ```python
+ # Import necessary libraries
+ import flask
+ from flask import request, jsonify
+
+ # Create a Flask app
+ app = flask.Flask(__name__)
+
+ # Create a route for the API
+ @app.route('/api', methods=['GET'])
+ def api_call():
+ # Get the data from the request
+ data = request.args
+
+ # Process the data
+ result = process_data(data)
+
+ # Return the result
+ return jsonify(result)
+
+ # Function to process the data
+ def process_data(data):
+ # Do something with the data
+ result = data + 1
+
+ # Return the result
+ return result
+
+ # Run the app
+ if __name__ == '__main__':
+ app.run()
+
+ ```
and the second attempt at 0.9 as temperature value:
-```python
-import flask
-from flask import request, jsonify
-
-# create the Flask app
-app = flask.Flask(__name__)
-app.config['DEBUG'] = True
-
-# create some test data
-books = [
- {'id': 0, 'title': 'A Fire Upon The Deep', 'author': 'Vernor Vinge', 'first_sentence': 'The coldsleep itself was dreamless.', 'year_published': '1992'},
- {'id': 1, 'title': 'The Ones Who Walk Away From Omelas', 'author': 'Ursula K. Le Guin', 'first_sentence': 'With a clamor of bells that set the swallows soaring, the Festival of Summer came to the city Omelas, bright-towered by the sea.', 'published': '1973'},
- {'id': 2, 'title': 'Dhalgren', 'author': 'Samuel R. Delany', 'first_sentence': 'to wound the autumnal city.', 'published': '1975'}
-]
-
-# create an endpoint
-@app.route('/', methods=['GET'])
-def home():
- return '''Welcome to our book API!
'''
-
-@app.route('/api/v1/resources/books
-
-```
+ ```python
+ import flask
+ from flask import request, jsonify
+
+ # create the Flask app
+ app = flask.Flask(__name__)
+ app.config['DEBUG'] = True
+
+ # create some test data
+ books = [
+ {'id': 0, 'title': 'A Fire Upon The Deep', 'author': 'Vernor Vinge', 'first_sentence': 'The coldsleep itself was dreamless.', 'year_published': '1992'},
+ {'id': 1, 'title': 'The Ones Who Walk Away From Omelas', 'author': 'Ursula K. Le Guin', 'first_sentence': 'With a clamor of bells that set the swallows soaring, the Festival of Summer came to the city Omelas, bright-towered by the sea.', 'published': '1973'},
+ {'id': 2, 'title': 'Dhalgren', 'author': 'Samuel R. Delany', 'first_sentence': 'to wound the autumnal city.', 'published': '1975'}
+ ]
+
+ # create an endpoint
+ @app.route('/', methods=['GET'])
+ def home():
+ return '''Welcome to our book API!
'''
+
+ @app.route('/api/v1/resources/books
+
+ ```
As you can see, the results couldn't be more varied.
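Under the hood, temperature rescales the model's next-token scores before sampling: near 0, the highest-scoring token almost always wins; near 1, lower-scoring tokens get a real chance. The toy sampler below only illustrates the idea and is not how the OpenAI service is implemented:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index: low temperature sharpens the distribution
    toward the highest-scoring token, high temperature flattens it."""
    if temperature == 0:
        # Degenerate case: always pick the most likely token (deterministic).
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    top = max(scaled)
    weights = [math.exp(s - top) for s in scaled]  # unnormalized softmax
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate tokens
rng = random.Random(42)

# Temperature 0 is fully deterministic: the same token every time.
print([sample_token(logits, 0, rng) for _ in range(5)])  # [0, 0, 0, 0, 0]
# Temperature 0.9 introduces run-to-run variation.
print([sample_token(logits, 0.9, rng) for _ in range(5)])
```

This mirrors what we observed: at temperature 0 the two generations were nearly identical, while at 0.9 they diverged completely.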
@@ -578,19 +578,19 @@ Here are some good practices to consider:
Here's code in Python showing how to build a simple API using Flask:
-```python
-from flask import Flask, request
-
-app = Flask(__name__)
-
-@app.route('/')
-def hello():
- name = request.args.get('name', 'World')
- return f'Hello, {name}!'
-
-if __name__ == '__main__':
- app.run()
-```
+ ```python
+ from flask import Flask, request
+
+ app = Flask(__name__)
+
+ @app.route('/')
+ def hello():
+ name = request.args.get('name', 'World')
+ return f'Hello, {name}!'
+
+ if __name__ == '__main__':
+ app.run()
+ ```
Use an AI assistant like GitHub Copilot or ChatGPT and apply the "self-refine" technique to improve the code.
@@ -599,7 +599,7 @@ Use an AI assistant like GitHub Copilot or ChatGPT and apply the "self-refine" t
Please attempt to solve the assignment by adding suitable prompts to the code.
> [!TIP]
-> Phrase a prompt to ask it to improve, it's a good idea to limit how many improvements. You can also ask to improve it in a certain way, for example architecture, performance, security, etc.
+> Phrase a prompt asking it to improve the code; it's a good idea to limit the number of improvements. You can also ask it to improve the code in a certain way, for example architecture, performance, or security.
[Solution](./solution.py)
@@ -613,9 +613,8 @@ Why would I use chain-of-thought prompting? Show me 1 correct response and 2 inc
Answer: A, because chain-of-thought is about showing the LLM how to solve a problem by providing it with a series of steps, and similar problems and how they were solved.
-## Great Work! Continue Your Learning
-
-Want to learn more about creating advanced prompts? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.
+## Great Work! Continue Your Learning
+Want to learn more about creating advanced prompts? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
-Head over to the Lesson 6 where we will apply our knowledge of Prompt Engineering by [building text generation apps](/6-text-generation-apps/README.md)!
\ No newline at end of file
+Head over to Lesson 6, where we will apply our knowledge of Prompt Engineering by [building text generation apps](/6-text-generation-apps/README.md)!
diff --git a/06-text-generation-apps/README.md b/06-text-generation-apps/README.md
index fdc01ea21..1bb118f3f 100644
--- a/06-text-generation-apps/README.md
+++ b/06-text-generation-apps/README.md
@@ -1,8 +1,8 @@
-# Building Text Generation Applications
+# Building Text Generation Applications
-[![Building Text Generation Applications ](./images/06-lesson-banner.png)](https://youtu.be/5jKHzY6-4s8)
+[![Building Text Generation Applications](./images/06-lesson-banner.png)](https://youtu.be/5jKHzY6-4s8)
-*(Click the image above to view video of this lesson)*
+> *(Click the image above to view video of this lesson)*
You've seen so far through this curriculum that there are core concepts like prompts and even a whole discipline called "prompt engineering". Many tools you can interact with, like ChatGPT, Office 365, Microsoft Power Platform and more, support using prompts to accomplish something.
@@ -33,16 +33,16 @@ Normally when you build an app it some kind of interface like the following:
### Console and UI apps are limited
-Compare it to a command-based app where you type a command:
+Compare it to a command-based app where you type a command:
- **It's limited**. You can't just type any command, only the ones that the app supports.
-- **Language specific**. Some apps support many languages, but by default the app is built for a specific language, even if you can add more language support.
+- **Language specific**. Some apps support many languages, but by default the app is built for a specific language, even if you can add more language support.
### Benefits of text generation apps
So how is a text generation app different?
-In a text generation app, you have more flexibility, you're not limited to a set of commands or a specific input language. Instead, you can use natural language to interact with the app. Other benefits is that because you're already interacting with a data source that has been trained on a vast corpus of information, where a traditional app might be limited on what's in a database.
+In a text generation app, you have more flexibility; you're not limited to a set of commands or a specific input language. Instead, you can use natural language to interact with the app. Another benefit is that you're interacting with a model trained on a vast corpus of information, whereas a traditional app might be limited to what's in a database.
### What can I build with a text generation app?
@@ -95,21 +95,20 @@ You need to carry out the following steps:
- Install Python
- Have created an Azure OpenAI Service resource. See this guide for how to [create a resource](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal).
-
### Locate API key and endpoint
At this point, you need to tell your `openai` library what API key to use. To find your API key, go to the "Keys and Endpoint" section of your Azure Open AI resource and copy the "Key 1" value.
- ![Keys and Endpoint resource blade in Azure Portal](https://learn.microsoft.com/en-us/azure/ai-services/openai/media/quickstarts/endpoint.png)
+![Keys and Endpoint resource blade in Azure Portal](https://learn.microsoft.com/azure/ai-services/openai/media/quickstarts/endpoint.png)
Now that you have this information copied, let's instruct the libraries to use it.
> [!NOTE]
> It's worth separating your API key from your code. You can do so by using environment variables.
+>
> - Set the environment variable `OPENAI_API_KEY` to your API key.
> `export OPENAI_API_KEY='sk-...'`
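Reading that variable back in Python needs only the standard library. A minimal sketch; the helper name is ours, not part of the `openai` package:

```python
import os

def load_api_key(name="OPENAI_API_KEY"):
    """Fetch an API key from the environment, failing loudly if it's missing
    instead of raising a confusing error deep inside a request later."""
    key = os.environ.get(name)
    if key is None:
        raise RuntimeError(f"Set the {name} environment variable first.")
    return key

# Usage, after `export OPENAI_API_KEY='sk-...'` in your shell:
# openai.api_key = load_api_key()
```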
-
### Setup configuration Azure
If you're using Azure Open AI, here's how you setup configuration:
@@ -146,7 +145,7 @@ In above code we create a completion object and pass in the model we want to use
### Chat completions
-So far, you've seen how we've been using `Completion` to generate text. But there's another class called `ChatCompletion` that is more suited for chatbots. Here's an example of using it:
+So far, you've seen how we've been using `Completion` to generate text. But there's another class called `ChatCompletion` that is more suited for chatbots. Here's an example of using it:
```python
import openai
@@ -172,7 +171,7 @@ Now that we learned how to setup and configure openai, it's time to build your f
```
> [!NOTE]
- > If you're using Windows type `venv\Scripts\activate` instead of `source venv/bin/activate`.
+ > If you're using Windows, type `venv\Scripts\activate` instead of `source venv/bin/activate`.
> [!NOTE]
> Locate your Azure Open AI key by going to https://portal.azure.com/, searching for `Open AI`, selecting the `Open AI resource`, then selecting `Keys and Endpoint` and copying the `Key 1` value.
@@ -198,8 +197,8 @@ Now that we learned how to setup and configure openai, it's time to build your f
# print response
print(completion.choices[0].text)
```
-
- > [!NOTE]
+
+ > [!NOTE]
> If you're using Azure Open AI, you need to set the `api_type` to `azure` and set the `api_key` to your Azure Open AI key.
You should see an output like the following:
@@ -212,7 +211,7 @@ Now that we learned how to setup and configure openai, it's time to build your f
## Different types of prompts, for different things
-Now you've seen how to generate text using a prompt. You even have a program up and running that you can modify and change to generate different types of text.
+Now you've seen how to generate text using a prompt. You even have a program up and running that you can modify and change to generate different types of text.
Prompts can be used for all sorts of tasks like for example:
@@ -368,7 +367,7 @@ Shopping List:
- Pepper
```
-That's your five recipes, with no garlic mentioned and you also have a shopping list considering what you already have at home.
+That's your five recipes, with no garlic mentioned, and you also have a shopping list that considers what you already have at home.
## Exercise - build a recipe generator
@@ -395,7 +394,7 @@ Now that we have played out a scenario, let's write code to match the demonstrat
> NOTE, your LLM is non-deterministic, so you might get different results every time you run the program.
- Great, let's see how we can improve things. To improve things, we want to make sure the code is flexible, so ingredients and number of recipes can be improved and changed.
+ Great, let's see how we can improve things. We want to make sure the code is flexible, so the ingredients and the number of recipes can be changed.
1. Let's change the code in the following way:
@@ -409,7 +408,7 @@ Now that we have played out a scenario, let's write code to match the demonstrat
```
Taking the code for a test run, could look like:
-
+
```output
No of recipes (for example, 5: 3
List of ingredients (for example, chicken, potatoes, and carrots: milk,strawberries
@@ -436,8 +435,8 @@ To further improve it, we want to add the following:
Above, we add `{filter}` to the end the prompt and we also capture the filter value from the user.
An example input of running the program can now look like so:
-
- ```output
+
+ ```output
No of recipes (for example, 5: 3
List of ingredients (for example, chicken, potatoes, and carrots: onion,milk
Filter (for example, vegetarian, vegan, or gluten-free: no milk
@@ -505,13 +504,13 @@ To further improve it, we want to add the following:
As you can see, any recipes with milk in them have been filtered out. But if you're lactose intolerant, you might want to filter out recipes with cheese as well, so you need to be specific.
```python
-
+
- **Produce a shopping list**. We want to produce a shopping list, considering what we already have at home.
- For this functionality, we could either try to solve everything one prompt or we could split it up into two prompts. Let's try the latter approach. Here we're suggesting to add an additional prompt, but for that to work, we need to add the result of the former prompt as context to the latter prompt.
+ For this functionality, we could either try to solve everything in one prompt or split it up into two prompts. Let's try the latter approach. Here we suggest adding an additional prompt, but for that to work, we need to add the result of the former prompt as context to the latter prompt.
Locate the part in the code that prints out the result from the first prompt and add the following code below:
-
+
```python
old_prompt_result = completion.choices[0].text
prompt = "Produce a shopping list for the generated recipes and please don't include ingredients that I already have."
@@ -526,13 +525,13 @@ To further improve it, we want to add the following:
Note the following:
- - We're constructing a new prompt by adding the result from the first prompt to the new prompt:
-
+ 1. We're constructing a new prompt by adding the result from the first prompt to the new prompt:
+
```python
new_prompt = f"{old_prompt_result} {prompt}"
```
- - We make a new request, but also considering the number of tokens we asked for in the first prompt, so this time we say `max_tokens` is 1200.
+ 1. We make a new request, but also considering the number of tokens we asked for in the first prompt, so this time we say `max_tokens` is 1200.
```python
completion = openai.Completion.create(engine=deployment_name, prompt=new_prompt, max_tokens=1200)
@@ -553,17 +552,17 @@ To further improve it, we want to add the following:
-Flour, baking powder, baking soda, salt, sugar, egg, buttermilk, butter, apple, nutmeg, cinnamon, allspice
```
-## Improve your setup
+## Improve your setup
What we have so far is code that works, but there are some tweaks we should be doing to improve things further. Some things we should do is:
- **Separate secrets from code**, like the API key. Secrets do not belong in code and should be stored in a secure location. To separate secrets from code, we can use environment variables and a library like `python-dotenv` to load them from a file. Here's how that would look in code:
- - Create a `.env` file with the following content:
+ 1. Create a `.env` file with the following content:
```bash
OPENAI_API_KEY=sk-...
- ```
+ ```
> Note, for Azure, you need to set the following environment variables:
@@ -582,7 +581,7 @@ What we have so far is code that works, but there are some tweaks we should be d
openai.api_key = os.environ["OPENAI_API_KEY"]
```
-
+
- **A word on token length**. We should consider how many tokens we need to generate the text we want. Tokens cost money, so where possible, we should try to be economical with the number of tokens we use. For example, can we phrase the prompt so that we can use fewer tokens?
To change tokens used, you can use the `max_tokens` parameter. For example, if you want to use 100 tokens, you would do:
@@ -601,8 +600,6 @@ What we have so far is code that works, but there are some tweaks we should be d
> Note, the closer to 1.0, the more varied the output.
-
-
## Assignment
For this assignment, you can choose what to build.
@@ -611,31 +608,35 @@ Here's some suggestions:
- Tweak the recipe generator app to improve it further. Play around with temperature values, and the prompts to see what you can come up with.
- Build a "study buddy". This app should be able to answer questions about a topic, for example Python. You could have prompts like "What is a certain topic in Python?", or a prompt that says "show me code for a certain topic", etc.
-- History bot, make history come alive, instruct the bot to play a certain historical character and ask it questions about its life and times.
+- History bot: make history come alive by instructing the bot to play a certain historical character and asking it questions about its life and times.
## Solution
### Study buddy
+Below is a starter prompt, see how you can use it and tweak it to your liking.
+
+```text
- "You're an expert on the Python language
Suggest a beginner lesson for Python in the following format:
-
+
Format:
- concepts:
- brief explanation of the lesson:
- exercise in code with solutions"
-
-Above is a starter prompt, see how you can use it and tweak it to your liking.
+```
### History bot
Here's some prompts you could be using:
+```text
- "You are Abe Lincoln, tell me about yourself in 3 sentences, and respond using grammar and words like Abe would have used"
- "You are Abe Lincoln, respond using grammar and words like Abe would have used:
Tell me about your greatest accomplishments, in 300 words:"
+```
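If you build the history bot with `ChatCompletion`, the persona usually goes in the system message so it applies to the whole conversation. A minimal sketch; the helper name is made up, and the actual API call is commented out so nothing here needs credentials:

```python
def build_history_bot_messages(persona, question):
    """Assemble a chat-completion message list: the persona goes in the
    system message, the user's question in a user message."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ]

messages = build_history_bot_messages(
    "You are Abe Lincoln, respond using grammar and words like Abe would have used.",
    "Tell me about your greatest accomplishments, in 300 words.",
)
# completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(messages[0]["role"])  # system
```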
## Knowledge check
@@ -653,11 +654,10 @@ What's a good way to store secrets like API keys?
1. In a file.
1. In environment variables.
-A: 3, because environment variables are not stored in code and can be loaded from the code.
-
-## Great Work! Continue Your Learning
+A: 3, because environment variables are not stored in code and can be loaded from the code.
-Want to learn more about about creating text generation applications? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.
+## Great Work! Continue Your Learning
+Want to learn more about creating text generation applications? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
Head over to Lesson 7, where we will look at how to [build chat applications](/7-building-chat-applications/README.md)!
diff --git a/07-building-chat-applications/README.md b/07-building-chat-applications/README.md
index 1fa4c8781..ccf80eb1d 100644
--- a/07-building-chat-applications/README.md
+++ b/07-building-chat-applications/README.md
@@ -1,10 +1,9 @@
# Building Generative AI-Powered Chat Applications
[![Building Generative AI-Powered Chat Applications](./img/07-lesson-banner.png
-)](https://youtu.be/Kw4i-tlKMrQ)
-
-*(Click the image above to view video of this lesson)*
+)](https://youtu.be/Kw4i-tlKMrQ)
+> *(Click the image above to view video of this lesson)*
Now that we've seen how we can build text-generation apps, let's look into chat applications.
@@ -17,7 +16,7 @@ Some questions we need answered are:
As we move further into an age defined by automation and seamless human-machine interactions, understanding how generative AI transforms the scope, depth, and adaptability of chat applications becomes essential. This lesson will investigate the aspects of architecture that support these intricate systems, delve into the methodologies for fine-tuning them for domain-specific tasks, and evaluate the metrics and considerations pertinent to ensuring responsible AI deployment.
-## Introduction
+## Introduction
This lesson covers:
@@ -25,42 +24,41 @@ This lesson covers:
- How to apply customization and fine-tuning to applications.
- Strategies and considerations to effectively monitor chat applications.
-## Learning Goals
+## Learning Goals
By the end of this lesson, you'll be able to:
- Describe considerations for building and integrating chat applications into existing systems.
-- Customize chat applications for specific use-cases.
+- Customize chat applications for specific use-cases.
- Identify key metrics and considerations to effectively monitor and maintain the quality of AI-powered chat applications.
- Ensure chat applications leverage AI responsibly.
-## Integrating Generative AI into Chat Applications
+## Integrating Generative AI into Chat Applications
Elevating chat applications through generative AI isn't only centered around making them smarter; it's about optimizing their architecture, performance, and user interface to deliver a quality user experience. This involves investigating the architectural foundations, API integrations, and user interface considerations. This section aims to offer you a comprehensive roadmap for navigating these complex landscapes, whether you're plugging them into existing systems or building them as stand-alone platforms.
-
+
By the end of this section, you'll be equipped with the expertise needed to efficiently construct and incorporate chat applications.
### Chatbot or Chat application?
-Before we dive into building chat applications, let's compare 'chatbots' against 'AI-powered chat applications,' which serve distinct roles and functionalities. A chatbot's main purpose is to automate specific conversational tasks, such as answering frequently asked questions or tracking a package. It's typically governed by rule-based logic or complex AI algorithms. In contrast, an AI-powered chat application is a far more expansive environment designed to facilitate various forms of digital communication, such as text, voice, and video chats among human users. Its defining feature is the integration of generative AI model that simulate nuanced, human-like conversations, generating responses based on a wide variety of input and contextual cues. A generative AI powered chat application can engage in open-domain discussions, adapt to evolving conversational contexts, and even produce creative or complex dialogue.
+Before we dive into building chat applications, let's compare 'chatbots' against 'AI-powered chat applications,' which serve distinct roles and functionalities. A chatbot's main purpose is to automate specific conversational tasks, such as answering frequently asked questions or tracking a package. It's typically governed by rule-based logic or complex AI algorithms. In contrast, an AI-powered chat application is a far more expansive environment designed to facilitate various forms of digital communication, such as text, voice, and video chats among human users. Its defining feature is the integration of generative AI models that simulate nuanced, human-like conversations, generating responses based on a wide variety of input and contextual cues. A generative AI-powered chat application can engage in open-domain discussions, adapt to evolving conversational contexts, and even produce creative or complex dialogue.
The table below outlines the key differences and similarities to help us understand their unique roles in digital communication.
| Chatbot | Generative AI-Powered Chat Application |
| ------------------------------------- | -------------------------------------- |
| Task-Focused and rule based | Context-aware |
-| Often integrated into larger systems | May host one or multiple chatbots |
+| Often integrated into larger systems  | May host one or multiple chatbots       |
| Limited to programmed functions | Incorporates generative AI models |
| Specialized & structured interactions | Capable of open-domain discussions |
-
### Leveraging pre-built functionalities with SDKs and APIs
When building a chat application, a great first step is to assess what is already out there. Using SDKs and APIs to build chat applications is an advantageous strategy for a variety of reasons. By integrating well-documented SDKs and APIs, you're strategically positioning your application for long-term success, addressing scalability and maintenance concerns.
- **Expedites the development process and reduces overhead**: Relying on pre-built functionalities instead of the expensive process of building them yourself allows you to focus on other aspects of your application that you may find more important, such as business logic.
- **Better performance**: When building functionality from scratch, you'll eventually ask yourself "How does it scale? Is this application capable of handling a sudden influx of users?" Well-maintained SDKs and APIs often have built-in solutions for these concerns.
-- **Easier maintenance**: Updates and improvements are easier to manage as most APIs and SDKs simply require an update to a library when a newer version is released.
+- **Easier maintenance**: Updates and improvements are easier to manage as most APIs and SDKs simply require an update to a library when a newer version is released.
+- **Access to cutting-edge technology**: Leveraging models that have been fine-tuned and trained on extensive datasets provides your application with natural language capabilities.
Accessing functionality of an SDK or API typically involves obtaining permission to use the provided services, which is often through the use of a unique key or authentication token. We'll use the OpenAI Python Library to explore what this looks like. You can also try it out on your own in the [notebook](notebook.ipynb) for this lesson.
@@ -76,7 +74,7 @@ chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[
The above example uses the GPT-3.5 Turbo model to complete the prompt, but notice that the API key is set prior to doing so. You'd receive the following error if you didn't set the key.
-```
+```output
AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.
```
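A minimal sketch of that pattern, before any request is made. The helper below mirrors the library's convention of reading the key from the `OPENAI_API_KEY` environment variable; the helper name and placeholder value are invented for illustration, not part of the library:

```python
import os

def resolve_api_key(env):
    # Look up the key the same way the client library does by default,
    # failing loudly when it is missing instead of at request time.
    key = env.get("OPENAI_API_KEY")
    if key is None:
        raise RuntimeError("No API key provided")
    return key

# A real application would pass os.environ directly; the override here
# injects an obviously fake placeholder value.
env = dict(os.environ, OPENAI_API_KEY="sk-placeholder")
print(resolve_api_key(env))  # → sk-placeholder
```

Checking for the key up front turns the runtime `AuthenticationError` above into an immediate, clearer failure at startup.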
@@ -88,7 +86,7 @@ General UX principles apply to chat applications, but here's some additional con
- **Context retention**: Advanced generative AI models have the ability to remember context within a conversation, which can be a necessary asset to the user experience. Giving users the ability to control and manage context improves the user experience, but introduces the risk of retaining sensitive user information. Considerations for how long this information is stored, such as introducing a retention policy, can balance the need for context against privacy.
- **Personalization**: With the ability to learn and adapt, AI models offer an individualized experience for a user. Tailoring the user experience through features like user profiles not only makes the user feel understood, but also helps them find specific answers, creating a more efficient and satisfying interaction.
-One such example of personalization is the "Custom instructions" settings in OpenAI's ChatGPT. It allows you to provide information about yourself that may be important context for your prompts. Here's an example of a custom instruction.
+One such example of personalization is the "Custom instructions" settings in OpenAI's ChatGPT. It allows you to provide information about yourself that may be important context for your prompts. Here's an example of a custom instruction.
![Custom Instructions Settings in ChatGPT](img/custom_instructions.png)
@@ -96,7 +94,6 @@ This "profile" prompts ChatGPT to create a lesson plan on linked lists. Notice t
![A prompt in ChatGPT for a lesson plan about linked lists](img/lesson_plan_prompt.png)
-
### Microsoft's System Message Framework for Large Language Models
[Microsoft has provided guidance](https://learn.microsoft.com/azure/ai-services/openai/concepts/system-message#define-the-models-output-format) for writing effective system messages when generating responses from LLMs, broken down into four areas:
@@ -106,7 +103,6 @@ This "profile" prompts ChatGPT to create a lesson plan on linked lists. Notice t
3. Providing specific examples that demonstrate intended behavior of the model.
4. Providing additional behavioral guardrails.
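For illustration, here is a hypothetical system message that touches all four areas. The persona, rules, and example are invented for this sketch, and the first two headings reflect the commonly cited opening areas of the framework (defining the model's profile and its output format):

```text
# Profile, capabilities, and limitations
You are a tutoring assistant for high-school math students.
Only answer questions about algebra and geometry.

# Output format
Reply in short numbered steps, ending with the final answer on its own line.

# Example of intended behavior
User: "Solve x + 2 = 5"  ->  1. Subtract 2 from both sides.  2. x = 3.

# Behavioral guardrails
If asked about other subjects or for graded-exam answers, politely decline.
```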
-
### Accessibility
Whether a user has visual, auditory, motor, or cognitive impairments, a well-designed chat application should be usable by all. The following list breaks down specific features aimed at enhancing accessibility for various user impairments.
@@ -123,36 +119,34 @@ Imagine a chat application that understands your company's jargon and anticipate
- **Leveraging DSL models**. DSL stands for domain-specific language. You can leverage a so-called DSL model trained on a specific domain to understand its concepts and scenarios.
- **Apply fine-tuning**. Fine-tuning is the process of further training your model with specific data.
-### Using a DSL
+## Customization: Using a DSL
Leveraging a domain-specific language model (DSL model) can enhance user engagement by providing specialized, contextually relevant interactions. It's a model that is trained or fine-tuned to understand and generate text related to a specific field, industry, or subject. Options for using a DSL model can vary from training one from scratch, to using pre-existing ones through SDKs and APIs. Another option is fine-tuning, which involves taking an existing pre-trained model and adapting it for a specific domain.
-### Apply fine-tuning
-
+## Customization: Apply fine-tuning
-Fine-tuning is often considered when a pre-trained model falls short in a specialized domain or specific task.
+Fine-tuning is often considered when a pre-trained model falls short in a specialized domain or specific task.
-For instance, medical queries are complex and require a lot of context. When a medical professional diagnoses a patient it's based on a variety of factors such as lifestyle or pre-existing conditions, and may even rely on recent medical journals to validate their diagnosis. In such nuanced scenarios, a general-purpose AI chat application cannot be a reliable source.
+For instance, medical queries are complex and require a lot of context. When a medical professional diagnoses a patient it's based on a variety of factors such as lifestyle or pre-existing conditions, and may even rely on recent medical journals to validate their diagnosis. In such nuanced scenarios, a general-purpose AI chat application cannot be a reliable source.
-**Scenario: a medical application**
+### Scenario: a medical application
-Consider a chat application designed to assist medical practitioners by providing quick references to treatment guidelines, drug interactions, or recent research findings.
+Consider a chat application designed to assist medical practitioners by providing quick references to treatment guidelines, drug interactions, or recent research findings.
While a general-purpose model might be adequate for answering basic medical questions or providing general advice, it may struggle with the following:
-- **Highly specific or complex cases**. For example, a neurologist might ask the application, "What are the current best practices for managing drug-resistant epilepsy in pediatric patients?"
+- **Highly specific or complex cases**. For example, a neurologist might ask the application, "What are the current best practices for managing drug-resistant epilepsy in pediatric patients?"
- **Lacking recent advancements**. A general-purpose model could struggle to provide a current answer that incorporates the most recent advancements in neurology and pharmacology.
In instances such as these, fine-tuning the model with a specialized medical dataset can significantly improve its ability to handle these intricate medical inquiries more accurately and reliably. This requires access to a large and relevant dataset that represents the domain-specific challenges and questions that need to be addressed.
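As a sketch of what such a dataset might look like, the snippet below builds a single hypothetical training record in the chat-style JSONL format many fine-tuning services accept; the field names and content are illustrative, not a specific provider's schema:

```python
import json

# One hypothetical training record for the medical scenario, using the
# system/user/assistant chat format common in fine-tuning datasets.
records = [
    {
        "messages": [
            {"role": "system", "content": "You assist medical practitioners with treatment guidelines."},
            {"role": "user", "content": "Best practices for managing drug-resistant epilepsy in pediatric patients?"},
            {"role": "assistant", "content": "A guideline-backed summary would go here."},
        ]
    },
]

# Fine-tuning datasets are usually uploaded as JSONL: one JSON object per line
jsonl = "\n".join(json.dumps(record) for record in records)
print(len(jsonl.splitlines()))  # → 1
```

A real dataset would contain many such records, each reviewed by domain experts before training.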
-
## Considerations for a High Quality AI-Driven Chat Experience
This section outlines the criteria for "high-quality" chat applications, which include the capture of actionable metrics and adherence to a framework that responsibly leverages AI technology.
-### Key Metrics
+### Key Metrics
-To maintain the high-quality performance an application, it's essential to keep track of key metrics and considerations. These measurements not only ensure the functionality of the application but also assess the quality of the AI model and user experience. Below is a list that covers basic, AI, and user experience metrics to consider.
+To maintain the high-quality performance of an application, it's essential to keep track of key metrics and considerations. These measurements not only ensure the functionality of the application but also assess the quality of the AI model and user experience. Below is a list that covers basic, AI, and user experience metrics to consider.
| Metric | Definition | Considerations for Chat Developer |
| ----------------------------- | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- |
@@ -167,12 +161,10 @@ To maintain the high-quality performance an application, it's essential to keep
| **Retraining Cycles** | The frequency with which the model is updated to incorporate new data and insights. | How often will you retrain the model? What triggers a retraining cycle? |
| **Anomaly Detection** | Tools and techniques for identifying unusual patterns that do not conform to expected behavior. | How will you respond to anomalies? |
-
### Implementing Responsible AI Practices in Chat Applications
Microsoft's approach to Responsible AI has identified six principles that should guide AI development and use. Below are the principles, their definitions, things a chat developer should consider, and why they should take them seriously.
-
| Principles | Microsoft's Definition | Considerations for Chat Developer | Why It's Important |
| ---------------------- | ----------------------------------------------------- | ---------------------------------------------------------------------- | -------------------------------------------------------------------------------------- |
| Fairness | AI systems should treat all people fairly. | Ensure the chat application does not discriminate based on user data. | To build trust and inclusivity among users; avoids legal ramifications. |
@@ -182,15 +174,12 @@ Microsoft's approach to Responsible AI has identified six principles that should
| Transparency | AI systems should be understandable. | Provide clear documentation and reasoning for AI responses. | Users are more likely to trust a system if they can understand how decisions are made. |
| Accountability | People should be accountable for AI systems. | Establish a clear process for auditing and improving AI decisions. | Enables ongoing improvement and corrective measures in case of mistakes. |
-## Coding Challenge
-See [assignment](./notebook-azure-openai.ipynb) it will take you through a series of exercises from running your first chat prompts, to classifying and summaring text and more.
-
-
-
+## Assignment
+
+See the [assignment](./notebook-azure-openai.ipynb); it will take you through a series of exercises, from running your first chat prompts to classifying and summarizing text, and more.
## Great Work! Continue the Journey
-Want to learn more about creating chat applications with Generative AI? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.
+Want to learn more about creating chat applications with Generative AI? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
-
Head over to Lesson 8 to see how you can start [building search applications](/08-building-search-applications/README.md)!
diff --git a/08-building-search-applications/README.md b/08-building-search-applications/README.md
index f80c79b5c..583a9631f 100644
--- a/08-building-search-applications/README.md
+++ b/08-building-search-applications/README.md
@@ -1,10 +1,10 @@
# Building Search Applications
-[![Introduction to Generative AI and Large Language Models](./media/genai_course_8[80].png)]()
+[![Introduction to Generative AI and Large Language Models](./media/genai_course_8[80].png)](TBD)
-**Video Coming Soon**
+> **Video Coming Soon**
-There's more to LLMs than chatbots and text generation. It's also possible to build search applications using Embeddings. Embeddings are numerical representations of data also known as vectors, and can be used for semantic search for data.
+There's more to LLMs than chatbots and text generation. It's also possible to build search applications using Embeddings. Embeddings are numerical representations of data, also known as vectors, which can be used for semantic search over data.
In this lesson, you are going to build a search application for our education startup Edu4All. Our startup is a non-profit organization that provides free education to students in developing countries. Our startup has a large number of YouTube videos that students can use to learn about AI. Our startup wants to build a search application that allows students to search for a YouTube video by typing a question.
@@ -27,7 +27,6 @@ After completing this lesson, you will be able to:
- Explain what Text Embeddings are.
- Create an application using Embeddings to search for data.
-
## Why build a search application?
Creating a search application will help you understand how to use Embeddings to search for data. You will also learn how to build a search application that can be used by students to find information quickly.
@@ -54,7 +53,7 @@ Here's an example, image the following text is in a transcript from one of the e
Today we are going to learn about Azure Machine Learning.
```
-We'd pass the text to the OpenAI Embedding API and it would return the following embedding consisting of 1536 numbers aka a vector. Each number in the vector represents a different aspect of the text. For brievity, here are the first 10 numbers in the vector.
+We'd pass the text to the OpenAI Embedding API, and it would return the following embedding consisting of 1536 numbers, also known as a vector. Each number in the vector represents a different aspect of the text. For brevity, here are the first 10 numbers in the vector.
```python
[-0.006655829958617687, 0.0026128944009542465, 0.008792596869170666, -0.02446001023054123, -0.008540431968867779, 0.022071078419685364, -0.010703742504119873, 0.003311325330287218, -0.011632772162556648, -0.02187200076878071, ...]
@@ -82,7 +81,7 @@ We've learned about text embeddings, the next step is to learn how to use text e
### What is cosine similarity?
-Cosine similarity is a measure of similarity between two vectors, you'll also hear this referred to as `nearest neighbor search`. To perform a cosine similarity search you need to `vectorize` for `query` text using the OpenAI Embedding API. Then calculate the `cosine similarity` between the query vector and each vector in the Embedding Index. Remember, the Embedding Index has a vector for each YouTube transcript text segment. Finally, sort the results by cosine similarity and the text segments with the highest cosine similarity are the most similar to the query.
+Cosine similarity is a measure of similarity between two vectors; you'll also hear this referred to as `nearest neighbor search`. To perform a cosine similarity search, you need to _vectorize_ the _query_ text using the OpenAI Embedding API. Then calculate the _cosine similarity_ between the query vector and each vector in the Embedding Index. Remember, the Embedding Index has a vector for each YouTube transcript text segment. Finally, sort the results by cosine similarity; the text segments with the highest cosine similarity are the most similar to the query.
From a mathematical perspective, cosine similarity measures the cosine of the angle between two vectors projected in a multidimensional space. This measurement is beneficial because, even if two documents are far apart by Euclidean distance due to size, they could still have a smaller angle between them and therefore higher cosine similarity. For more information about cosine similarity equations, see [Cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity).
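The search procedure above can be sketched in a few lines of Python. Here a toy three-dimensional "Embedding Index" stands in for the real 1536-dimensional vectors; the titles and numbers are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy Embedding Index: one vector per transcript segment
embedding_index = {
    "Intro to Azure Machine Learning": [0.9, 0.1, 0.0],
    "Cooking pasta at home":           [0.0, 0.2, 0.9],
}
query_vector = [1.0, 0.0, 0.1]  # in practice: the vectorized query text

# Rank segments by similarity to the query, highest first
ranked = sorted(embedding_index,
                key=lambda title: cosine_similarity(query_vector, embedding_index[title]),
                reverse=True)
print(ranked[0])  # → Intro to Azure Machine Learning
```

Identical vectors score 1.0, orthogonal vectors 0.0, which is why sorting by this value surfaces the most semantically similar segments.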
@@ -159,7 +158,6 @@ When you run the notebook, you'll be prompted to enter a query. The input box wi
## Great Work! Continue Your Learning
-Want to learn more about how to build search applications? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.
+Want to learn more about how to build search applications? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
-
-Head over to the Lesson 9 where we will look at how to [build image generation applications](/09-building-image-applications/README.md)!
\ No newline at end of file
+Head over to Lesson 9, where we will look at how to [build image generation applications](/09-building-image-applications/README.md)!
diff --git a/09-building-image-applications/README.md b/09-building-image-applications/README.md
index 9e6d9c6f6..08b7f802e 100644
--- a/09-building-image-applications/README.md
+++ b/09-building-image-applications/README.md
@@ -1,12 +1,12 @@
# Building Image Generation Applications
-[![Building Image Geneation Applications](./images/genai_course_9[70].png)]()
+[![Building Image Generation Applications](./images/genai_course_9[70].png)](TBD)
-**Video Coming Soon**
+> **Video Coming Soon**
There's more to LLMs than text generation. It's also possible to generate images from text descriptions. Having images as a modality can be highly useful in a number of areas, from MedTech, architecture, and tourism to game development and more. In this chapter we will look into the two most popular image generation models, DALL-E and Midjourney.
-## Introduction
+## Introduction
In this lesson, we will cover:
@@ -14,12 +14,12 @@ In this lesson, we will cover:
- DALL-E and Midjourney, what they are and how they work.
- How you would build an image generation app.
-## Learning Goals
+## Learning Goals
After completing this lesson, you will be able to:
- Build an image generation application.
-- Define boundaries for your application with meta prompts.
+- Define boundaries for your application with meta prompts.
- Work with DALL-E and Midjourney.
## Why build an image generation application?
@@ -28,46 +28,44 @@ Image generation applications are a great way to explore the capabilities of Gen
- **Image editing and synthesis**. You can generate images for a variety of use cases, such as image editing and image synthesis.
-- **Applied to a variety of industries**. They can also be used to generate images for a variety of industries like Medtech, Tourism, Game development and more.
+- **Applied to a variety of industries**. They can also be used to generate images for a variety of industries like Medtech, Tourism, Game development and more.
-## Scenario: Edu4All
+## Scenario: Edu4All
-As part of this lesson, we will continue to work with our startup, Edu4All, in this lesson. The students will create images for their assessments, exactly what images is up to the students, but it could be illustrations for their own fairytale or create a new character for their story or help them to visualize their ideas and concepts.
+In this lesson, we will continue to work with our startup, Edu4All. The students will create images for their assessments. Exactly what images is up to the students: they could illustrate their own fairytale, create a new character for their story, or visualize their ideas and concepts.
Here's what Edu4All's students could generate, for example, if they're working on a class about monuments:
-![Edu4All startup, class on monuments, Eifel Tower](startup.png)
+![Edu4All startup, class on monuments, Eiffel Tower](startup.png)
-using a prompt like
+using a prompt like
-> "Dog next to Eifel Tower in early morning sunlight"
+> "Dog next to Eiffel Tower in early morning sunlight"
-## What is DALL-E and Midjourney?
+## What is DALL-E and Midjourney?
[DALL-E](https://openai.com/dall-e-2) and [Midjourney](https://www.midjourney.com/) are two of the most popular image generation models; they allow you to use prompts to generate images.
### DALL-E
-Let's start with DALL-E, which is a Generative AI model that generates images from text descriptions.
+Let's start with DALL-E, which is a Generative AI model that generates images from text descriptions.
> [DALL-E is a combination of two models, CLIP and diffused attention](https://towardsdatascience.com/openais-dall-e-and-clip-101-a-brief-introduction-3a4367280d4e).
- **CLIP** is a model that generates embeddings, which are numerical representations of data, from images and text.
-- **Diffused attention**, is a model that generates images from embeddings. DALL-E is trained on a dataset of images and text and can be used to generate images from text descriptions. For example, DALL-E can be used to generate images of a cat in a hat, or a dog with a mohawk.
+- **Diffused attention** is a model that generates images from embeddings. DALL-E is trained on a dataset of images and text and can be used to generate images from text descriptions. For example, DALL-E can be used to generate images of a cat in a hat, or a dog with a mohawk.
### Midjourney
-
-Midjourney works in a similar way to DALL-E, it generates images from text prompts. Midjourney, can also be used to generate images using prompts like “a cat in a hat”, or a “dog with a mohawk”.
-
+Midjourney works in a similar way to DALL-E: it generates images from text prompts. Midjourney can also be used to generate images using prompts like “a cat in a hat” or a “dog with a mohawk”.
![Image generated by Midjourney, mechanical pigeon](https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Rupert_Breheny_mechanical_dove_eca144e7-476d-4976-821d-a49c408e4f36.png/440px-Rupert_Breheny_mechanical_dove_eca144e7-476d-4976-821d-a49c408e4f36.png)
*Image credit: Wikipedia; image generated by Midjourney*
-## How does DALL-E and Midjourney Work
+## How do DALL-E and Midjourney Work
-First, [DALL-E](https://arxiv.org/pdf/2102.12092.pdf). DALL-E is a Generative AI model based on the transformer architecture with an *autoregressive transformer*.
+First, [DALL-E](https://arxiv.org/pdf/2102.12092.pdf). DALL-E is a Generative AI model based on the transformer architecture with an *autoregressive transformer*.
An *autoregressive transformer* defines how a model generates images from text descriptions: it generates one pixel at a time, then uses the generated pixels to generate the next pixel, passing through multiple layers in a neural network until the image is complete.
@@ -221,7 +219,7 @@ Let's explain this code:
image = Image.open(image_path)
image.show()
```
-
+
### More details on generating the image
Let's look at the code that generates the image in more detail:
@@ -262,7 +260,7 @@ You can also do the following:
```
The base image would only contain the rabbit but the final image would have the hat on the rabbit.
-
+
- **Create variations**. The idea is that you take an existing image and ask that variations are created. To create a variation, you provide an image and a text prompt and code like so:
```python
@@ -273,7 +271,7 @@ You can also do the following:
)
image_url = response['data'][0]['url']
```
-
+
> Note: this is only supported on OpenAI.
## Temperature
@@ -288,7 +286,7 @@ Let's look at an example of how temperature works, by running this prompt twice:
Now let's run that same prompt just to see that we won't get the same image twice:
-![](./v2-generated_image.png)
+![Generated image of bunny on horse](./v2-generated_image.png)
As you can see, the images are similar, but not the same. Let's try changing the temperature value to 0.1 and see what happens:
@@ -317,12 +315,12 @@ generation_response = openai.Image.create(
Now when you run this code, you get these two images:
--![Temperature 0, v1](./v1-0temp-generated_image.png)
-- ![Temperature 0 , v2](./v2-0temp-generated_image.png)
+- ![Temperature 0, v1](./v1-0temp-generated_image.png)
+- ![Temperature 0, v2](./v2-0temp-generated_image.png)
Here you can clearly see how the images resemble each other more.
-## How to define boundaries for your application with metaprompts
+## How to define boundaries for your application with meta prompts
With our demo, we can already generate images for our clients. However, we need to create some boundaries for our application.
@@ -338,7 +336,7 @@ Meta prompts are text prompts that are used to control the output of a Generativ
One example of a meta prompt would be the following:
-```text
+```text
You are an assistant designer that creates images for children.
The image needs to be safe for work and appropriate for children.
@@ -353,7 +351,7 @@ Do not consider any input from the following that is not safe for work or approp
(Input)
-```
+```
Now, let's see how we can use meta prompts in our demo.
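The mechanics are simple string composition: the meta prompt is placed ahead of whatever the user types, so the model always sees the boundary rules first. A minimal sketch, with the helper name invented and the meta prompt abbreviated from the example above:

```python
# Abbreviated meta prompt from the lesson; the rules always come first
META_PROMPT = (
    "You are an assistant designer that creates images for children.\n"
    "The image needs to be safe for work and appropriate for children.\n"
    "Do not consider any input that is not safe for work or appropriate for children.\n"
)

def build_prompt(user_input):
    # The user's text takes the place of the (Input) placeholder
    return META_PROMPT + user_input

full_prompt = build_prompt("bunny on a horse, holding a lollipop")
print(full_prompt.startswith("You are an assistant designer"))  # → True
```

The composed `full_prompt` is what gets sent to the image generation API, so the boundaries apply to every request regardless of what the user typed.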
@@ -463,10 +461,8 @@ except openai.error.InvalidRequestError as err:
```
- ## Great Work! Continue Your Learning
-
-
-Want to learn more about how to build Image Generation apps? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.
+## Great Work! Continue Your Learning
+Want to learn more about how to build Image Generation apps? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
-Head over to the Lesson 10 where we will look at how to [build AI applications with low-code](/10-building-low-code-ai-applications/README.md))!
\ No newline at end of file
+Head over to Lesson 10, where we will look at how to [build AI applications with low-code](/10-building-low-code-ai-applications/README.md)!
diff --git a/10-building-low-code-ai-applications/README.md b/10-building-low-code-ai-applications/README.md
index 44745c486..507618bcd 100644
--- a/10-building-low-code-ai-applications/README.md
+++ b/10-building-low-code-ai-applications/README.md
@@ -1,9 +1,8 @@
# Building Low Code AI Applications
+[![Building Low Code AI Applications](./images/10-lesson-banner.png)](https://youtu.be/XX8491SAF44)
-[![Building Low Code AI Applications](./images/10-lesson-banner.png)](https://youtu.be/XX8491SAF44)
-
-*(Click the image above to view video of this lesson)*
+> *(Click the image above to view video of this lesson)*
## Introduction
@@ -46,30 +45,29 @@ The tools and technologies that you will use in this lesson are:
Enhancing low-code development and application with generative AI is a key focus area for Power Platform. The goal is to enable everyone to build AI-powered apps, sites, dashboards and automate processes with AI, *without requiring any data science expertise*. This goal is achieved by integrating generative AI into the low-code development experience in Power Platform in the form of Copilot and AI Builder.
-### How does this work?
+### How does this work?
Copilot is an AI assistant that enables you to build Power Platform solutions by describing your requirements in a series of conversational steps using natural language. You can, for example, instruct your AI assistant to state what fields your app will use, and it will create both the app and the underlying data model, or you could specify how to set up a flow in Power Automate.
You can use Copilot driven functionalities as a feature in your app screens to enable users to uncover insights through conversational interactions.
-
AI Builder is a low-code AI capability available in Power Platform that enables you to use AI Models to help you to automate processes and predict outcomes. With AI Builder you can bring AI to your apps and flows that connect to your data in Dataverse or in various cloud data sources, such as SharePoint, OneDrive or Azure.
Copilot is available in all of the Power Platform products: Power Apps, Power Automate, Power BI, Power Pages and Power Virtual Agents. AI Builder is available in Power Apps and Power Automate. In this lesson, we will focus on how to use Copilot and AI Builder in Power Apps and Power Automate to build a solution for our education startup.
### Copilot in Power Apps
-As part of the Power Platform, Power Apps provides a low-code development environment for building apps to track, manage and interact with data. It's a suite of app development services with a scalable data platform and the ability to connect to cloud services and on-premises data. Power Apps allows you to build apps that run on browsers, tablets, and phones, and can be shared with co-workers. Power Apps eases users into app development with a simple interface, so that every business user or pro developer can build custom apps. The app development experience is also enhanced with Generative AI through Copilot.
+As part of the Power Platform, Power Apps provides a low-code development environment for building apps to track, manage and interact with data. It's a suite of app development services with a scalable data platform and the ability to connect to cloud services and on-premises data. Power Apps allows you to build apps that run on browsers, tablets, and phones, and can be shared with co-workers. Power Apps eases users into app development with a simple interface, so that every business user or pro developer can build custom apps. The app development experience is also enhanced with Generative AI through Copilot.
The copilot AI assistant feature in Power Apps enables you to describe what kind of app you need and what information you want your app to track, collect, or show. Copilot then generates a responsive Canvas app based on your description. You can then customize the app to meet your needs. The AI Copilot also generates and suggests a Dataverse table with the fields you need to store the data you want to track, along with some sample data. We will look at what Dataverse is and how you can use it in Power Apps later in this lesson. You can then customize the table to meet your needs using the AI Copilot assistant feature through conversational steps. This feature is readily available from the Power Apps home screen.
### Copilot in Power Automate
-As part of the Power Platform, Power Automate lets users create automated workflows between applications and services. It helpes automate repetitive business processes such as communication, data collection, and decision approvals. Its simple interface allows users with every technical competence (from beginners to seasoned developers) to automate work tasks. The workflow development experience is also enhanced with Generative AI through Copilot.
+As part of the Power Platform, Power Automate lets users create automated workflows between applications and services. It helps automate repetitive business processes such as communication, data collection, and decision approvals. Its simple interface allows users at every level of technical competence (from beginners to seasoned developers) to automate work tasks. The workflow development experience is also enhanced with Generative AI through Copilot.
The Copilot AI assistant feature in Power Automate enables you to describe what kind of flow you need and what actions you want your flow to perform. Copilot then generates a flow based on your description. You can then customize the flow to meet your needs. The AI Copilot also generates and suggests the actions you need to perform the task you want to automate. We will look at what flows are and how you can use them in Power Automate later in this lesson. You can then customize the actions to meet your needs using the AI Copilot assistant feature through conversational steps. This feature is readily available from the Power Automate home screen.
-## Assignment: manage student assignments and invoices for our startup, using Copilot
+## Assignment: manage student assignments and invoices for our startup, using Copilot
Our startup provides online courses to students. The startup has grown rapidly and is now struggling to keep up with the demand for its courses. The startup has hired you as a Power Platform developer to help them build a low-code solution to manage their student assignments and invoices. The solution should help them track and manage student assignments through an app and automate invoice processing through a workflow. You have been asked to use Generative AI to develop the solution.
@@ -77,42 +75,41 @@ When you are getting started with using Copilot, you can use the [Power Platform
### Build a Student Assignment Tracker App for Our Startup
-The educators at our startup have been struggling to keep track of student assignments. They have been using a spreadsheet to track the assignments but this has become difficult to manage as the number of students has increased. They have asked you to build an app that will help them track and manage student assignments. The app should enable them to add new assignments, view assignments, update assignments and delete assignments. The app should also enable the educators and students to view the assignments that have been graded and those that have not been graded.
+The educators at our startup have been struggling to keep track of student assignments. They have been using a spreadsheet to track the assignments but this has become difficult to manage as the number of students has increased. They have asked you to build an app that will help them track and manage student assignments. The app should enable them to add new assignments, view assignments, update assignments and delete assignments. The app should also enable the educators and students to view the assignments that have been graded and those that have not been graded.
You will build the app using Copilot in Power Apps following the steps below:
1. Navigate to the [Power Apps](https://make.powerapps.com) home screen.
-2. Use the text area on the home screen to describe the app you want to build. For example, ***I want to build an app to track and manage student assignments***. Click on the **Send** button to send the prompt to the AI Copilot.
+1. Use the text area on the home screen to describe the app you want to build. For example, ***I want to build an app to track and manage student assignments***. Click on the **Send** button to send the prompt to the AI Copilot.
![Describe the app you want to build](images/copilot-chat-prompt-powerapps.png)
+1. The AI Copilot will suggest a Dataverse Table with the fields you need to store the data you want to track and some sample data. You can then customize the table to meet your needs using the AI Copilot assistant feature through conversational steps.
-3. The AI Copilot will suggest a Dataverse Table with the fields you need to store the data you want to track and some sample data. You can then customize the table to meet your needs using the AI Copilot assistant feature through conversational steps.
-
- > **Important**: Dataverse is the underlying data platform for Power Platform. It is a low-code data platform for storing the apps data. It is a fully managed service that securely stores data in the Microsoft Cloud and is provisioned within your Power Platform environment. It comes with built-in data governance capabilities, such as data classification, data lineage, fine-grained access control, and more. You can learn more about Dataverse [here](https://docs.microsoft.com/en-us/powerapps/maker/data-platform/data-platform-intro?WT.mc_id=academic-109639-somelezediko).
+ > **Important**: Dataverse is the underlying data platform for Power Platform. It is a low-code data platform for storing the apps data. It is a fully managed service that securely stores data in the Microsoft Cloud and is provisioned within your Power Platform environment. It comes with built-in data governance capabilities, such as data classification, data lineage, fine-grained access control, and more. You can learn more about Dataverse [here](https://docs.microsoft.com/powerapps/maker/data-platform/data-platform-intro?WT.mc_id=academic-109639-somelezediko).
![Suggested fields in your new table](images/copilot-dataverse-table-powerapps.png)
-4. Educators want to send emails to the students who have submitted their assignments to keep them updated on the progress of their assignments. You can use Copilot to add a new field to the table to store the student email. For example, you can use the following prompt to add a new field to the table: ***I want to add a column to store student email***. Click on the **Send** button to send the prompt to the AI Copilot.
+1. Educators want to send emails to the students who have submitted their assignments to keep them updated on the progress of their assignments. You can use Copilot to add a new field to the table to store the student email. For example, you can use the following prompt to add a new field to the table: ***I want to add a column to store student email***. Click on the **Send** button to send the prompt to the AI Copilot.
![Adding a new field](images/copilot-new-column.png)
-5. The AI Copilot will generate a new field and you can then customize the field to meet your needs.
+1. The AI Copilot will generate a new field and you can then customize the field to meet your needs.
-6. Once you are done with the table, click on the **Create app** button to create the app.
+1. Once you are done with the table, click on the **Create app** button to create the app.
-7. The AI Copilot will generate a responsive Canvas app based on your description. You can then customize the app to meet your needs.
+1. The AI Copilot will generate a responsive Canvas app based on your description. You can then customize the app to meet your needs.
-8. For educators to send emails to students, you can use Copilot to add a new screen to the app. For example, you can use the following prompt to add a new screen to the app: ***I want to add a screen to send emails to students***. Click on the **Send** button to send the prompt to the AI Copilot.
+1. For educators to send emails to students, you can use Copilot to add a new screen to the app. For example, you can use the following prompt to add a new screen to the app: ***I want to add a screen to send emails to students***. Click on the **Send** button to send the prompt to the AI Copilot.
![Adding a new screen via a prompt instruction](images/copilot-new-screen.png)
-9. The AI Copilot will generate a new screen and you can then customize the screen to meet your needs.
+1. The AI Copilot will generate a new screen and you can then customize the screen to meet your needs.
-10. Once you are done with the app, click on the **Save** button to save the app.
+1. Once you are done with the app, click on the **Save** button to save the app.
-11. To share the app with the educators, click on the **Share** button and then click on the **Share** button again. You can then share the app with the educators by entering their email addresses.
+1. To share the app with the educators, click on the **Share** button and then click on the **Share** button again. You can then share the app with the educators by entering their email addresses.
> **Your homework**: The app you just built is a good start but can be improved. With the email feature, educators can only send emails to students manually by typing their addresses. Can you use Copilot to build an automation that enables educators to send emails to students automatically when they submit their assignments? Hint: with the right prompt, you can use Copilot in Power Automate to build this.
@@ -120,9 +117,9 @@ You will build the app using Copilot in Power Apps following the steps below:
The finance team of our startup has been struggling to keep track of invoices. They have been using a spreadsheet to track the invoices but this has become difficult to manage as the number of invoices has increased. They have asked you to build a table that will help them store, track and manage the information of the invoices they received. The table should be used to build an automation that will extract all the invoice information and store it in the table. The table should also enable the finance team to view the invoices that have been paid and those that have not been paid.
-The Power Platform has an underlying data platform called Dataverse that enables you to store the data for your apps and solutions. Dataverse provides a low-code data platform for storing the apps data. It is a fully managed service that securely stores data in the Microsoft Cloud and is provisioned within your Power Platform environment. It comes with built-in data governance capabilities, such as data classification, data lineage, fine-grained access control, and more. You can learn more about Dataverse [here](https://docs.microsoft.com/en-us/powerapps/maker/data-platform/data-platform-intro?WT.mc_id=academic-109639-somelezediko).
+The Power Platform has an underlying data platform called Dataverse that enables you to store the data for your apps and solutions. Dataverse provides a low-code data platform for storing the apps data. It is a fully managed service that securely stores data in the Microsoft Cloud and is provisioned within your Power Platform environment. It comes with built-in data governance capabilities, such as data classification, data lineage, fine-grained access control, and more. You can learn more [about Dataverse here](https://docs.microsoft.com/powerapps/maker/data-platform/data-platform-intro?WT.mc_id=academic-109639-somelezediko).
-Why should we use Dataverse for our startup? The standard and custom tables within Dataverse provide a secure and cloud-based storage option for your data. Tables let you store different types of data, similar to how you might use multiple worksheets in a single Excel workbook. You can use tables to store data that is specific to your organization or business need. Some of the benefits our startup will get from using Dataverse include but are not limited to:
+Why should we use Dataverse for our startup? The standard and custom tables within Dataverse provide a secure and cloud-based storage option for your data. Tables let you store different types of data, similar to how you might use multiple worksheets in a single Excel workbook. You can use tables to store data that is specific to your organization or business need. Some of the benefits our startup will get from using Dataverse include but are not limited to:
- **Easy to manage**: Both the metadata and data are stored in the cloud, so you don't have to worry about the details of how they are stored or managed. You can focus on building your apps and solutions.
@@ -144,23 +141,23 @@ To create a table in Dataverse using Copilot, follow the steps below:
![Select new table](images/describe-new-table.png)
-3. On the **Describe the new Table** screen, use the text area to describe the table you want to create. For example, ***I want to create a table to store invoice information***. Click on the **Send** button to send the prompt to the AI Copilot.
+1. On the **Describe the new Table** screen, use the text area to describe the table you want to create. For example, ***I want to create a table to store invoice information***. Click on the **Send** button to send the prompt to the AI Copilot.
![Describe the table](images/copilot-chat-prompt-dataverse.png)
-4. The AI Copilot will suggest a Dataverse Table with the fields you need to store the data you want to track and some sample data. You can then customize the table to meet your needs using the AI Copilot assistant feature through conversational steps.
+1. The AI Copilot will suggest a Dataverse Table with the fields you need to store the data you want to track and some sample data. You can then customize the table to meet your needs using the AI Copilot assistant feature through conversational steps.
![Suggested Dataverse table](images/copilot-dataverse-table.png)
-5. The finance team want to send an email to the supplier to update them with the current status of their invoice. You can use Copilot to add a new field to the table to store the supplier email. For example, you can use the following prompt to add a new field to the table: ***I want to add a column to store supplier email***. Click on the **Send** button to send the prompt to the AI Copilot.
+1. The finance team wants to send an email to the supplier to update them on the current status of their invoice. You can use Copilot to add a new field to the table to store the supplier email. For example, you can use the following prompt to add a new field to the table: ***I want to add a column to store supplier email***. Click on the **Send** button to send the prompt to the AI Copilot.
-6. The AI Copilot will generate a new field and you can then customize the field to meet your needs.
+1. The AI Copilot will generate a new field and you can then customize the field to meet your needs.
-7. Once you are done with the table, click on the **Create** button to create the table.
+1. Once you are done with the table, click on the **Create** button to create the table.
## AI Models in Power Platform with AI Builder
-AI Builder is a low-code AI capability available in Power Platform that enables you to use AI Models to help you to automate processes and predict outcomes. With AI Builder you can bring AI to your apps and flows that connect to your data in Dataverse or in various cloud data sources, such as SharePoint, OneDrive or Azure.
+AI Builder is a low-code AI capability available in Power Platform that enables you to use AI Models to automate processes and predict outcomes. With AI Builder you can bring AI to your apps and flows that connect to your data in Dataverse or in various cloud data sources, such as SharePoint, OneDrive or Azure.
## Prebuilt AI Models vs Custom AI Models
@@ -177,7 +174,7 @@ Some of the Prebuilt AI Models available in Power Platform include:
- **Form Processing**: This model extracts information from forms.
- **Invoice Processing**: This model extracts information from invoices.
-With Custom AI Models you can bring your own model into AI Builder so that it can function like any AI Builder custom model, allowing you to train the model using your own data. You can use these models to automate processes and predict outcomes in both Power Apps and Power Automate. When using your own model there are limitations that apply. Read more on these [limitations](https://learn.microsoft.com/en-us/ai-builder/byo-model#limitations).
+With Custom AI Models you can bring your own model into AI Builder so that it can function like any AI Builder custom model, allowing you to train the model using your own data. You can use these models to automate processes and predict outcomes in both Power Apps and Power Automate. When using your own model there are limitations that apply. Read more on these [limitations](https://learn.microsoft.com/ai-builder/byo-model#limitations).
![AI builder models](images/ai-builder-models.png)
@@ -193,7 +190,7 @@ To build a workflow that will help the finance team process invoices using the I
2. Use the text area on the home screen to describe the workflow you want to build. For example, ***Process an invoice when it arrives in my mailbox***. Click on the **Send** button to send the prompt to the AI Copilot.
- ![](images/copilot-chat-prompt-powerautomate.png)
+ ![Copilot power automate](images/copilot-chat-prompt-powerautomate.png)
3. The AI Copilot will suggest the actions you need to perform the task you want to automate. You can click on the **Next** button to go through the next steps.
@@ -207,22 +204,22 @@ To build a workflow that will help the finance team process invoices using the I
8. Remove the **Condition** action from the flow because you will not be using it. It should look like the following screenshot:
- ![](images/powerautomate-remove-actions.png)
+ ![power automate, remove actions](images/powerautomate-remove-actions.png)
9. Click on the **Add an action** button and search for **Dataverse**. Select the **Add a new row** action.
10. On the **Extract Information from invoices** action, update the **Invoice File** to point to the **Attachment Content** from the email. This will ensure that the flow extracts information from the invoice attachment.
-11. Select the **Table** you created earlier on. For example, you can select the **Invoice Information** table. Choose the dynamic content from the previous action to populate the following fields:
+11. Select the **Table** you created earlier on. For example, you can select the **Invoice Information** table. Choose the dynamic content from the previous action to populate the following fields:
- * ID
- * Amount
- * Date
- * Name
- * Status - Set the **Status** to **Pending**.
- * Supplier Email - Use the **From** dynamic content from the **When a new email arrives** trigger.
+ - ID
+ - Amount
+ - Date
+ - Name
+ - Status - Set the **Status** to **Pending**.
+ - Supplier Email - Use the **From** dynamic content from the **When a new email arrives** trigger.
- ![](images/powerautomate-add-row.png)
+ ![power automate add row](images/powerautomate-add-row.png)
12. Once you are done with the flow, click on the **Save** button to save the flow. You can then test the flow by sending an email with an invoice to the folder you specified in the trigger.
@@ -230,19 +227,18 @@ To build a workflow that will help the finance team process invoices using the I
## Use a Text Generation AI Model in Power Automate
-The Create Text with GPT AI Model in AI Builder enables you to generate text based on a prompt and is powered by the Microsoft Azure OpenAI Service. With this capability, you can incorporate GPT (Generative Pre-Trained Transformer) technology into your apps and flows to build a variety of automations and insightful applications.
+The Create Text with GPT AI Model in AI Builder enables you to generate text based on a prompt and is powered by the Microsoft Azure OpenAI Service. With this capability, you can incorporate GPT (Generative Pre-Trained Transformer) technology into your apps and flows to build a variety of automated flows and insightful applications.
GPT models undergo extensive training on vast amounts of data, enabling them to produce text that closely resembles human language when provided with a prompt. When integrated with workflow automation, AI models like GPT can be harnessed to streamline and automate a wide range of tasks.
-For example, you can build flows to automatically generate text for a variety of use cases, such as: drafts of emails, product descriptions, and more. You can also use the model to generate text for a variety of apps, such as chatbots and customer service apps that enable customer service agents to respond effectively and efficiently to customer inquiries.
-
-![](images/create-prompt-gpt.png)
+For example, you can build flows to automatically generate text for a variety of use cases, such as drafts of emails, product descriptions, and more. You can also use the model to generate text for a variety of apps, such as chatbots and customer service apps that enable customer service agents to respond effectively and efficiently to customer inquiries.
-To learn how to use this AI Model in Power Automate, go through the [Add intelligence with AI Builder and GPT](https://learn.microsoft.com/en-us/training/modules/ai-builder-text-generation/?WT.mc_id=academic-109639-somelezediko) module.
+![create a prompt](images/create-prompt-gpt.png)
+To learn how to use this AI Model in Power Automate, go through the [Add intelligence with AI Builder and GPT](https://learn.microsoft.com/training/modules/ai-builder-text-generation/?WT.mc_id=academic-109639-somelezediko) module.
-## Great Work! Contiune Your Learning
+## Great Work! Continue Your Learning
-Want to learn more about building low-code Generative AI apps? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.
+Want to learn more about building low-code Generative AI apps? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
-Head over to the Lesson 11 where we will look at how to [intergrate Generative AI with Function Calling](/11-intergating-with-function-calling/README.md)!
+Head over to the Lesson 11 where we will look at how to [integrate Generative AI with Function Calling](/11-intergating-with-function-calling/README.md)!
diff --git a/11-intergating-with-function-calling/README.md b/11-intergating-with-function-calling/README.md
index 6b807230e..57a790df9 100644
--- a/11-intergating-with-function-calling/README.md
+++ b/11-intergating-with-function-calling/README.md
@@ -1,62 +1,61 @@
# Integrating with function calling
-![](./images/genai_course_11[90].png)
+![chapter image](./images/genai_course_11[90].png)
You've learned a fair bit so far in the previous lessons. However, we can improve further. Some things we can address are how we can get a more consistent response format to make it easier to work with the response downstream. Also, we might want to add data from other sources to further enrich our application.
The problems mentioned above are what this chapter looks to address.
-**Video Coming Soon**
+> **Video Coming Soon**
-## Introduction
+## Introduction
-This lesson will cover:
+This lesson will cover:
-- Explain what is function calling and its use cases.
-- Creating a function call using Azure OpenAI.
-- How to integrate a function call into an application.
+- Explain what function calling is and its use cases.
+- Creating a function call using Azure OpenAI.
+- How to integrate a function call into an application.
-## Learning Goals
+## Learning Goals
-After completing this lesson you will be able to:
-
-- Explain the purpose of using function calling.
-- Setup Function Call using the Azure Open AI Service.
-- Design effective function calls for your applications use case.
+After completing this lesson you will be able to:
+- Explain the purpose of using function calling.
+- Set up a function call using the Azure Open AI Service.
+- Design effective function calls for your application's use case.
## Scenario: improving our chatbot with functions
-For this lesson, we want to build a feature for our education startup that allows users to use a chatbot to find technical courses. We will recommend courses that fit their skill level, current role and technology of interest.
+For this lesson, we want to build a feature for our education startup that allows users to use a chatbot to find technical courses. We will recommend courses that fit their skill level, current role and technology of interest.
-To complete this scenario we will use a combination of:
+To complete this scenario we will use a combination of:
- - `Azure Open AI` to create a chat experience for the user
- - `Microsoft Learn Catalog API` to help users find courses based on the request of the user
- - `Function Calling` to take the user's query and send it to a function to make the API request.
+- `Azure Open AI` to create a chat experience for the user.
+- `Microsoft Learn Catalog API` to help users find courses based on the request of the user.
+- `Function Calling` to take the user's query and send it to a function to make the API request.
-To get started, let's look at why we would want to use function calling in the first place:
+To get started, let's look at why we would want to use function calling in the first place:
-## Why Function Calling
+## Why Function Calling
-Before function calling, responses from an LLM were unstructured and inconsistent. Developers were required to write complex validation code to make sure they are able to handle each variation of a response. Users could not get answers like "What is the current weather in Stockholm?". This is because models were limited to the time the data was trained on.
+Before function calling, responses from an LLM were unstructured and inconsistent. Developers were required to write complex validation code to make sure they were able to handle each variation of a response. Users could not get answers to questions like "What is the current weather in Stockholm?". This is because models were limited to the data they were trained on.
-Function Calling is a feature of the Azure Open AI Service to overcome to the following limitations:
+Function Calling is a feature of the Azure Open AI Service that overcomes the following limitations:
- **Consistent response format**. If we can better control the response format we can more easily integrate the response downstream into other systems.
-- **External data**. Ability to use data from other sources of an application in a chat context.
+- **External data**. Ability to use data from other sources of an application in a chat context.
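To make the consistent-format point concrete, here is a minimal sketch of a function definition in the JSON-schema style the Chat Completions API accepts via its `functions` parameter. The function name and fields are our own assumptions for the student example used later in this lesson, not something prescribed by the API:

```python
# A sketch of a function definition in the JSON-schema format the
# Chat Completions API accepts. Name and fields are illustrative
# assumptions for the student-description example.
student_function = {
    "name": "extract_student_info",
    "description": "Extract student information from the given text",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "Name of the student"},
            "major": {"type": "string", "description": "Major subject"},
            "school": {"type": "string", "description": "The university attended"},
            "grades": {"type": "number", "description": "GPA as a plain number"},
            "club": {"type": "string", "description": "School club"},
        },
        "required": ["name", "major", "school", "grades", "club"],
    },
}

# Declaring `grades` as a number steers the model toward returning 3.7
# rather than the free-form string "3.7 GPA".
print(student_function["parameters"]["properties"]["grades"]["type"])  # number
```

Passing a list like `[student_function]` as the `functions` argument of a chat completion call asks the model to fill in exactly these fields instead of inventing its own format.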
## Illustrating the problem through a scenario
> We recommend you create a file *Notebook.ipynb* and paste the code below into separate code cells if you want to run the scenario. You can also just read along, as we're trying to illustrate a problem that functions can help address.
-Let's look at the example that illustrates the response format problem:
+Let's look at an example that illustrates the response format problem:
Let's say we want to create a database of student data so we can suggest the right course to them. Below we have two descriptions of students that are very similar in the data they contain.
1. Create a connection to our Azure Open AI resource:
- ```python
+ ```python
import os
import openai
import json
@@ -70,17 +69,17 @@ Let's say we want to create a database of student data so we can suggest the rig
1. Creating two student descriptions using variables `student_1_description` and `student_2_description`.
- ```python
+ ```python
student_1_description="Emily Johnson is a sophomore majoring in computer science at Duke University. She has a 3.7 GPA. Emily is an active member of the university's Chess Club and Debate Team. She hopes to pursue a career in software engineering after graduating."
- student_2_description = "Michael Lee is a sophomore majoring in computer science at Stanford University. He has a 3.8 GPA. Michael is known for his programming skills and is an active member of the university's Robotics Club. He hopes to pursue a career in artificial intelligence after finshing his studies."
+ student_2_description = "Michael Lee is a sophomore majoring in computer science at Stanford University. He has a 3.8 GPA. Michael is known for his programming skills and is an active member of the university's Robotics Club. He hopes to pursue a career in artificial intelligence after finishing his studies."
```
- We want send the above student descriptions to an LLM to parse the data. This data can later be used in our an application and be sent to an API or stored into a database.
+   We want to send the above student descriptions to an LLM to parse the data. This data can later be used in our application and be sent to an API or stored in a database.
-1. Let's create two identical prompts that we instruct the LLM on what information that we are interested in:
+1. Let's create two identical prompts that instruct the LLM on what information we are interested in:
- ```python
+ ```python
prompt1 = f'''
Please extract the following information from the given text and return it as a JSON object:
@@ -110,10 +109,9 @@ Let's say we want to create a database of student data so we can suggest the rig
The above prompts instruct the LLM to extract information and to return the response in JSON format.
+1. After setting up the prompts and the connection to Azure Open AI, we will now send the prompts to the LLM by using `openai.ChatCompletion`. We store the prompt in the `messages` variable and assign the role to `user`. This is to mimic a message from a user being written to a chatbot.
-1. After setting up the prompts and the connection to Azure Open AI, we will now send the prompts to the LLM by using `openai.ChatCompletion`. We store the prompt in the `messages` variable and assign the role to `user`. This is to mimic a message from a user being written to a chatbot.
-
- ```python
+ ```python
# response from prompt one
openai_response1 = openai.ChatCompletion.create(
engine="gpt-function",
@@ -129,17 +127,17 @@ Let's say we want to create a database of student data so we can suggest the rig
openai_response2['choices'][0]['message']['content']
```
-Now we can send both requests to the LLM and examine the response we receive by finding it like so `openai_response1['choices'][0]['message']['content']`.
+Now we can send both requests to the LLM and examine the response we receive by accessing it like so: `openai_response1['choices'][0]['message']['content']`.
1. Lastly, we can convert the response to a JSON object by calling `json.loads`:
- ```python
+ ```python
# Loading the response as a JSON object
json_response1 = json.loads(openai_response1['choices'][0]['message']['content'])
json_response1
```
- Response 1:
+ Response 1:
```json
{'name': 'Emily Johnson',
@@ -149,9 +147,9 @@ Now we can send both requests to the LLM and examine the response we receive by
'club': 'Chess Club'}
```
- Response 2:
+ Response 2:
- ```json
+ ```json
{'name': 'Michael Lee',
'major': 'computer science',
'school': 'Stanford University',
@@ -159,43 +157,43 @@ Now we can send both requests to the LLM and examine the response we receive by
'club': 'Robotics Club'}
```
- Even though the prompts are the same and the descriptions are similar, we see values of the `Grades` property formatted differently as we can sometime get the format `3.7` or `3.7 GPA` for example.
+   Even though the prompts are the same and the descriptions are similar, we see the values of the `Grades` property formatted differently, as we can sometimes get the format `3.7` or `3.7 GPA`, for example.
This result is because the LLM takes unstructured data in the form of the written prompt and also returns unstructured data. We need a structured format so that we know what to expect when storing or using this data.
So how do we solve the formatting problem then? By using function calling, we can make sure that we receive structured data back. When using function calling, the LLM does not actually call or run any functions. Instead, we create a structure for the LLM to follow for its responses. We then use those structured responses to know what function to run in our applications.
-
- ![](./images/Function-Flow.png)
-We can then take what is returned from the function and send this back to the LLM. The LLM will then respond using natural language to answer the user's query.
+![function flow](./images/Function-Flow.png)
+
+We can then take what is returned from the function and send this back to the LLM. The LLM will then respond using natural language to answer the user's query.
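As a rough sketch of that round trip, the snippet below hand-writes a model reply in the shape the 0.x `openai` library returns instead of making a live API call, and `search_courses` is a hypothetical local function for our course-finder scenario:

```python
import json

# Hypothetical local function the model can "call" -- the LLM only returns
# a name and JSON arguments; our own code does the actual execution.
def search_courses(role, product, level):
    return [{"title": f"{product} for {role}s", "level": level}]

AVAILABLE_FUNCTIONS = {"search_courses": search_courses}

# A hand-written stand-in for a model reply, shaped like the
# `function_call` field in a 0.x Chat Completions response.
model_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "search_courses",
        "arguments": '{"role": "student", "product": "Azure", "level": "beginner"}',
    },
}

call = model_message["function_call"]
args = json.loads(call["arguments"])  # the arguments arrive as a JSON string
result = AVAILABLE_FUNCTIONS[call["name"]](**args)

# Appended as a "function" role message, this is what we would send back
# to the LLM so it can answer the user in natural language.
function_message = {
    "role": "function",
    "name": call["name"],
    "content": json.dumps(result),
}
print(function_message["content"])
```

The key design point is the lookup table: the model chooses *which* function and *with what* arguments, while our application stays in control of what actually runs.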
-## Use Cases for using function calls
+## Use Cases for using function calls
There are many different use cases where function calls can improve your app, such as:
-- **Calling External Tools**. Chatbots are great at providing answers to questions from users. By using function calling, the chatbots can use messages from users to complete certain tasks. For example, a student can ask the chatbot to "Send email to my instructor saying I need more assistance with this subject". This can make a function call to `send_email(to: string, body: string)`
+- **Calling External Tools**. Chatbots are great at providing answers to questions from users. By using function calling, a chatbot can use messages from users to complete certain tasks. For example, a student can ask the chatbot to "Send an email to my instructor saying I need more assistance with this subject". This can make a function call to `send_email(to: string, body: string)`.
- **Create API or Database Queries**. Users can find information using natural language that gets converted into a formatted query or API request. An example of this could be a teacher who requests "Who are the students that completed the last assignment?", which could call a function named `get_completed(student_name: string, assignment: int, current_status: string)`.
- **Creating Structured Data**. Users can take a block of text or a CSV and use the LLM to extract important information from it. For example, a student can convert a Wikipedia article about peace agreements into AI flash cards. This can be done by using a function called `get_important_facts(agreement_name: string, date_signed: string, parties_involved: list)`.
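As an illustration of the first use case, the email task above could be described to the model with a function definition like the following sketch. The name `send_email` and its parameter schema are hypothetical, written in the Chat Completions `functions` format covered later in this lesson:

```python
# Hypothetical function definition for the "send email" use case.
# The LLM never sends the email itself; it only returns structured
# arguments (to, body) that our application passes to its own
# send_email() implementation.
send_email_function = {
    "name": "send_email",
    "description": "Send an email from the student to their instructor",
    "parameters": {
        "type": "object",
        "properties": {
            "to": {
                "type": "string",
                "description": "Email address of the recipient",
            },
            "body": {
                "type": "string",
                "description": "Content of the email",
            },
        },
        "required": ["to", "body"],
    },
}
```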
-## Creating Your First Function Call
+## Creating Your First Function Call
-The process of creating a function call includes 3 main steps:
+The process of creating a function call includes 3 main steps:
1. **Calling** the Chat Completions API with a list of your functions and a user message.
-2. **Reading** the model's response to perform an action ie execute a function or API Call.
-3. **Making** another call to Chat Completions API with the response from your function to use that information to create a response to the user.
+2. **Reading** the model's response to perform an action, i.e. execute a function or API call.
+3. **Making** another call to the Chat Completions API with the response from your function to use that information to create a response to the user.
-![](./images/LLM-Flow.png)
+![LLM Flow](./images/LLM-Flow.png)
-### Step 1 - creating messages
+### Step 1 - creating messages
-The first step is to create a user message. This can be dynamically assigned by take the value of a text input or you can assign a value here. If this is your first time working with the Chat Completions API, we need to define the `role` and the `content` of the message.
+The first step is to create a user message. This can be dynamically assigned by taking the value of a text input, or you can assign a value here. If this is your first time working with the Chat Completions API, we need to define the `role` and the `content` of the message.
-The `role` can be either `system` (creating rules) , `assistant` (the model) or `user` (the end-user). For function calling, we will assign this as `user` and an example question.
+The `role` can be either `system` (creating rules), `assistant` (the model) or `user` (the end-user). For function calling, we will assign this as `user`, with an example question.
-```python
+```python
messages = [{"role": "user", "content": "Find me a good course for beginner students to learn Azure."}]
```
@@ -205,11 +203,11 @@ By assigning different roles, it's made clear to the LLM if it's the system sayi
Next, we can define a function and the parameters of that function. We will use just one function here, called `search_courses`, but you can create multiple functions.
-> **Important** : Functions are included in thee system message to the LLM and will be included in the amount of available tokens you have available.
+> **Important**: Functions are included in the system message to the LLM and will count against the number of tokens you have available.
Below, we create the functions as an array of items. Each item is a function and has the properties `name`, `description` and `parameters`:
-```python
+```python
functions = [
{
"name":"search_courses",
@@ -240,26 +238,26 @@ functions = [
Let's describe the function's properties in more detail below:
-- `name` - The name of the function that we want to have called.
-- `description` - This is the description of how the function works. Here its important to be specific and clear.
+- `name` - The name of the function that we want to have called.
+- `description` - This is the description of how the function works. Here it's important to be specific and clear.
- `parameters` - A list of values and formats that you want the model to produce in its response. The parameters array consists of items, where each item has the following properties:
- - `type` - The data type of the properties will be stored in.
- - `properties` - List of the specific values that the model will use for its response
- - `name` - The key is the name of the property that model will use in its formatted response, for example `product`.
- - `type` - The data type of the this property, for example `string`.
- - `description` - Description of the specific property.
+ 1. `type` - The data type that the properties will be stored in.
+ 1. `properties` - A list of the specific values that the model will use for its response. Each property has:
+    1. `name` - The key that the model will use in its formatted response, for example `product`.
+    1. `type` - The data type of this property, for example `string`.
+    1. `description` - A description of the specific property.
-There's also an optional property `required` - required property for the function call to be completed.
+There is also an optional property `required`, which lists the properties that must be included for the function call to be completed.
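For example, in the `search_courses` schema, marking `role` as required could look like the following sketch (descriptions abbreviated):

```python
# Sketch: the `required` list tells the model which properties it must
# always include in its structured response.
search_courses_function = {
    "name": "search_courses",
    "description": "Retrieves courses from the search index based on the parameters provided",
    "parameters": {
        "type": "object",
        "properties": {
            "role": {"type": "string", "description": "The role of the learner"},
            "product": {"type": "string", "description": "The product the course covers"},
            "level": {"type": "string", "description": "The learner's experience level"},
        },
        # Without `required`, the model may omit any of these properties.
        "required": ["role"],
    },
}
```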
-### Step 3 - Making the function call
+### Step 3 - Making the function call
-After defining a function, we now need to include it in the call to the Chat Completion API. We do this by adding `functions` to the request. In this case `functions=functions`.
+After defining a function, we now need to include it in the call to the Chat Completions API. We do this by adding `functions` to the request. In this case `functions=functions`.
There is also the option to set `function_call` to `auto`. This means we let the LLM decide which function should be called based on the user message rather than assigning it ourselves.
Here's some code where we call `ChatCompletion.create`; note how we set `functions=functions` and `function_call="auto"`, thereby giving the LLM the choice of when to call the functions we provide it:
-```python
+```python
response = openai.ChatCompletion.create( engine="gpt-function",
messages=messages,
functions=functions,
@@ -292,23 +290,23 @@ As you can see, `student`, `Azure` and `beginner` was extracted from `messages`
Next, we need to see how we can use this in our app.
-## Integrating Function Calls into an Application.
+## Integrating Function Calls into an Application
-After we have tested the formatted response from the LLM, now we can integrate this into an application.
+Now that we have tested the formatted response from the LLM, we can integrate it into an application.
-### Managing the flow
+### Managing the flow
-To integrate this into our application, let's take the following steps:
+To integrate this into our application, let's take the following steps:
-1. First, lets make the call to the Open AI services and store the message in a variable called `response_message`.
+1. First, let's make the call to the OpenAI service and store the message in a variable called `response_message`.
- ```python
+ ```python
response_message = response["choices"][0]["message"]
```
-1. Now we will define the function that will call the Microsoft Learn API to get a list of courses:
+1. Now we will define the function that will call the Microsoft Learn API to get a list of courses:
- ```python
+ ```python
import requests
def search_courses(role, product, level):
@@ -330,11 +328,11 @@ To integrate this into our application, let's take the following steps:
Note how we now create actual Python functions that map to the function names introduced in the `functions` variable. We're also making real external API calls to fetch the data we need. In this case, we call the Microsoft Learn API to search for training modules.
- Ok, so we created `functions` variable and corresponding Python functions, how do we tell the LLM how to map these two together so our Python functions are called?
+ Ok, so we created the `functions` variable and the corresponding Python functions. How do we tell the LLM to map these two together so that our Python functions are called?
1. To see if we need to call a Python function, we need to look at the LLM response and check whether `function_call` is part of it; if so, we call that function. Here's how to make that check:
- ```python
+ ```python
# Check if the model wants to call a function
if response_message.get("function_call"):
print("Recommended Function call:")
@@ -388,7 +386,8 @@ To integrate this into our application, let's take the following steps:
Below is the output from running our code:
- **Output**
+ **Output**
+
    ```
    Recommended Function call:
{
"name": "search_courses",
@@ -406,7 +405,7 @@ To integrate this into our application, let's take the following steps:
```
-1. Now we will send the updated message, `messages` to the LLM so we can recieve a natural language response instead of an API JSON formatted response.
+1. Now we will send the updated message, `messages`, to the LLM so we can receive a natural language response instead of an API JSON formatted response.
```python
print("Messages in next request:")
@@ -425,29 +424,28 @@ To integrate this into our application, let's take the following steps:
print(second_response["choices"][0]["message"])
```
- **Output**
+ **Output**
```python
{
"role": "assistant",
- "content": "I found some good courses for beginner students to learn Azure:\n\n1. [Describe concepts of cryptography] (https://learn.microsoft.com/en-us/training/modules/describe-concepts-of-cryptography/?WT.mc_id=api_CatalogApi)\n2. [Introduction to audio classification with TensorFlow](https://learn.microsoft.com/en-us/training/modules/intro-audio-classification-tensorflow/?WT.mc_id=api_CatalogApi)\n3. [Design a Performant Data Model in Azure SQL Database with Azure Data Studio](https://learn.microsoft.com/en-us/training/modules/design-a-data-model-with-ads/?WT.mc_id=api_CatalogApi)\n4. [Getting started with the Microsoft Cloud Adoption Framework for Azure](https://learn.microsoft.com/en-us/training/modules/cloud-adoption-framework-getting-started/?WT.mc_id=api_CatalogApi)\n5. [Set up the Rust development environment](https://learn.microsoft.com/en-us/training/modules/rust-set-up-environment/?WT.mc_id=api_CatalogApi)\n\nYou can click on the links to access the courses."
+ "content": "I found some good courses for beginner students to learn Azure:\n\n1. [Describe concepts of cryptography] (https://learn.microsoft.com/training/modules/describe-concepts-of-cryptography/?WT.mc_id=api_CatalogApi)\n2. [Introduction to audio classification with TensorFlow](https://learn.microsoft.com/training/modules/intro-audio-classification-tensorflow/?WT.mc_id=api_CatalogApi)\n3. [Design a Performant Data Model in Azure SQL Database with Azure Data Studio](https://learn.microsoft.com/training/modules/design-a-data-model-with-ads/?WT.mc_id=api_CatalogApi)\n4. [Getting started with the Microsoft Cloud Adoption Framework for Azure](https://learn.microsoft.com/training/modules/cloud-adoption-framework-getting-started/?WT.mc_id=api_CatalogApi)\n5. [Set up the Rust development environment](https://learn.microsoft.com/training/modules/rust-set-up-environment/?WT.mc_id=api_CatalogApi)\n\nYou can click on the links to access the courses."
}
```
-## Assignment
+## Assignment
To continue your learning of Azure OpenAI Function Calling, you can build:
- - More parameters of the function that might help learners find more courses. You can find the available API parameters here:
- - Create another function call that takes more information from the learner like their native language
- - Create error handling on when the function call and/or API call does not return any suitable courses
-
- Hint: Follow the [Learn API reference documentation](https://learn.microsoft.com/en-us/training/support/catalog-api-developer-reference) page to see how and where this data is available.
+- Add more parameters to the function that might help learners find more courses. You can find the available API parameters here:
+- Create another function call that takes more information from the learner, such as their native language
+- Create error handling for when the function call and/or API call does not return any suitable courses
- ## Great Work! Continue the Journey
+ Hint: Follow the [Learn API reference documentation](https://learn.microsoft.com/training/support/catalog-api-developer-reference) page to see how and where this data is available.
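For the error-handling item, one possible approach is sketched below. This is not a reference solution: the `modules` key and the `title`/`url` fields are assumptions about the shape of the catalog API's JSON, so check the Learn API reference before relying on them. The idea is to catch request failures and handle an empty result before handing anything back to the LLM:

```python
import requests


def format_results(modules):
    # Turn the API's module list into a message the LLM can relay;
    # handle the "no suitable courses" case explicitly.
    if not modules:
        return "No suitable courses were found. Try different search terms."
    return "\n".join(f"{m['title']}: {m['url']}" for m in modules[:5])


def search_courses(role, product, level):
    # Endpoint used earlier in the lesson; the query parameters here
    # mirror the function schema (role, product, level).
    url = "https://learn.microsoft.com/api/catalog/"
    params = {"role": role, "product": product, "level": level, "type": "modules"}
    try:
        response = requests.get(url, params=params, timeout=10)
        response.raise_for_status()
        modules = response.json().get("modules", [])
    except (requests.RequestException, ValueError) as err:
        # Return text the LLM can turn into a friendly natural-language reply.
        return f"The course catalog could not be reached: {err}"
    return format_results(modules)
```

Because the failure cases return plain strings, the second call to the Chat Completions API can still produce a helpful natural-language answer instead of crashing the app.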
-Want to learn more about different Function Calling? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.
+## Great Work! Continue the Journey
+Want to learn more about Function Calling? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
Head over to Lesson 12, where we will look at how to [design UX for AI applications](/12-designing-ux-for-ai-applications/README.md)!
diff --git a/12-designing-ux-for-ai-applications/README.md b/12-designing-ux-for-ai-applications/README.md
index 5a57c56bb..8b19739c9 100644
--- a/12-designing-ux-for-ai-applications/README.md
+++ b/12-designing-ux-for-ai-applications/README.md
@@ -9,11 +9,11 @@ User experience is a very important aspect of building apps. Users need to be ab
## Introduction
-User experience is how a user interacts with and uses a specific product or service be it a system, tool, or design. When developing AI applications, developers not only focus on ensuring the user experience is effective but also ethical. In this lesson, we cover how to build Artificial Intelligence (AI) applications that adresses user needs.
+User experience is how a user interacts with and uses a specific product or service, be it a system, tool, or design. When developing AI applications, developers focus not only on ensuring the user experience is effective but also on ensuring it is ethical. In this lesson, we cover how to build Artificial Intelligence (AI) applications that address user needs.
The lesson will cover the following areas:
-* Introduction to User Experience and Undestanding User Needs
+* Introduction to User Experience and Understanding User Needs
* Designing AI Applications for Trust and Transparency
* Designing AI Applications for Collaboration and Feedback
@@ -26,7 +26,7 @@ After taking this lesson, you'll be able to:
### Prerequisite
-Take some time and read more about [user experience and design thinking.](https://learn.microsoft.com/en-us/training/modules/ux-design/)
+Take some time and read more about [user experience and design thinking.](https://learn.microsoft.com/training/modules/ux-design/)
## Introduction to User Experience and Understanding User Needs
@@ -56,7 +56,7 @@ Not every challenge can be solved with AI. AI comes in to augment your user expe
## Designing AI Applications for Trust and Transparency
-Building trust is crtitical when designing AI applications. Trust ensures a user is confident that the application will get the work done, deliver results consistently and the results are what the user needs. A risk in this area is mistrust and overtrust. Mistrust occures when a user has little or no trust in an AI system, this leads to the user rejecting your application. Overtrust occurs when a user overestimates the capability of an AI system, leading to users trusting the AI system too much. For example, an automated grading system in the case of overtrust might lead the teacher not to proof through some of the papers to ensure the grading system works well. This could result in unfair or inaccurate grades for the students, or missed opportunities for feedback and improvement.
+Building trust is critical when designing AI applications. Trust ensures a user is confident that the application will get the work done, deliver results consistently, and that the results are what the user needs. A risk in this area is mistrust and overtrust. Mistrust occurs when a user has little or no trust in an AI system, which leads to the user rejecting your application. Overtrust occurs when a user overestimates the capability of an AI system, leading to users trusting the AI system too much. For example, in the case of overtrust with an automated grading system, the teacher might not proofread some of the papers to ensure the grading system works well. This could result in unfair or inaccurate grades for the students, or missed opportunities for feedback and improvement.
Two ways to put trust right at the centre of design are explainability and control.
@@ -84,11 +84,11 @@ Another feature in Bing that allows a user to have control over the application
![Bing search results with options to modify the prompt and output](images/bing2.png "Bing search results with options to modify the prompt and output")
-> When designing AI applications, intentionality is key in ensuring users do not overtrust setting unrealistic expectations of its capabilities. One way to do this is by creating friction between the prompts and the results. Reminding the user, that this is AI and not a fellow human being
+> When designing AI applications, intentionality is key to ensuring users do not overtrust the application by setting unrealistic expectations of its capabilities. One way to do this is by creating friction between the prompts and the results, reminding the user that this is AI and not a fellow human being.
## Designing AI Applications for Collaboration and Feedback
-As earlier mentioned generative AI creates a collaboration between the user and AI. Most engagements are with a user inputing a prompt and the AI generating an output. What if the output is incorrect? How does the application handle errors if they occur? Does the AI blame the user or takes time to explain the error?
+As mentioned earlier, generative AI creates a collaboration between the user and the AI. Most engagements consist of a user inputting a prompt and the AI generating an output. What if the output is incorrect? How does the application handle errors if they occur? Does the AI blame the user or take time to explain the error?
AI applications should be built to receive and give feedback. This not only helps the AI system improve, but it also builds trust with the users. A feedback loop should be included in the design; an example can be a simple thumbs up or down on the output.
@@ -116,6 +116,6 @@ Take any AI apps you've built so far, consider implementing the below steps in y
## Congratulations, you have finished this course
-Want to learn more about designing UX for AI Applications? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.
+Want to learn more about designing UX for AI Applications? Go to the [continued learning page](../13-continued-learning/README.md) to find other great resources on this topic.
-Congratulations, you have completed this course! The building should not stop here. Hopefully you have been inspired to start building your own Generative AI startup. Head over to the [Microsoft Founders Hub](https://aka.ms/genai-foundershub) and apply for the program to recieve support on your journey.
+Congratulations, you have completed this course! The building should not stop here. Hopefully you have been inspired to start building your own Generative AI startup. Head over to the [Microsoft Founders Hub](https://aka.ms/genai-foundershub) and apply for the program to receive support on your journey.
diff --git a/13-continued-learning/README.md b/13-continued-learning/README.md
index 316fee636..0f0d2a76f 100644
--- a/13-continued-learning/README.md
+++ b/13-continued-learning/README.md
@@ -14,77 +14,63 @@ Are we missing a great resource? Let us know by submitting a PR!
🔗 [Generative AI: Implication and Applications for Education](https://arxiv.org/abs/2305.07605?wt.mc_id=github_S-1231_webpage_reactor)
-
-## Lesson 2 - Exploring and Comparing Different LLM types
+## Lesson 2 - Exploring and Comparing Different LLM types
🔗 [How to use Open Source foundation models curated by Azure Machine Learning (preview) - Azure Machine Learning | Microsoft Learn](https://learn.microsoft.com/azure/machine-learning/how-to-use-foundation-models?wt.mc_id=github_S-1231_webpage_reactor&view=azureml-api-2)
-## Lesson 2 - Exploring and Comparing Different LLM types
-
-
🔗 [Retrieval Augmented Generation using Azure Machine Learning prompt flow](https://learn.microsoft.com/azure/machine-learning/concept-retrieval-augmented-generation?wt.mc_id=github_S-1231_webpage_reactor&view=azureml-api-2)
🔗 [Grounding LLMs](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857?wt.mc_id=github_S-1231_webpage_reactor)
🔗 [The Large Language Model (LLM) Index | Sapling](https://sapling.ai/llm/index?wt.mc_id=github_S-1231_webpage_reactor )
-
🔗 [[2304.04052] Decoder-Only or Encoder-Decoder? Interpreting Language Model as a Regularized Encoder-Decoder (arxiv.org)](https://arxiv.org/abs/2304.04052?wt.mc_id=github_S-1231_webpage_reactor)
+## Lesson 3 - Using Generative AI Responsibly
- ## Lesson 3 - Using Generative AI Responsibly
🔗 [Fundamentals of Responsible Generative AI](https://learn.microsoft.com/training/modules/responsible-generative-ai/?wt.mc_id=github_S-1231_webpage_reactor)
-=======
-🔗 [Grounding LLMs](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857)
-
-## Lesson 3 - Using Generative AI Responsibly
+🔗 [Grounding LLMs](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857)
-🔗 [Fundamentals of Responsible Generative AI](https://learn.microsoft.com/en-us/training/modules/responsible-generative-ai/)
+🔗 [Fundamentals of Responsible Generative AI](https://learn.microsoft.com/training/modules/responsible-generative-ai/)
-🔗 [Being Responsible with Generative AI](https://learn.microsoft.com/en-us/shows/ai-show/being-responsible-with-generative-ai)
+🔗 [Being Responsible with Generative AI](https://learn.microsoft.com/shows/ai-show/being-responsible-with-generative-ai)
🔗 [GPT-4 System Card](https://cdn.openai.com/papers/gpt-4-system-card.pdf?wt.mc_id=github_S-1231_webpage_reactor)
-
## Lesson 4 - Understanding Prompt Engineering Fundamentals
-🔗 [Introduction to Prompt Engineering](https://learn.microsoft.com/azure/ai-services/openai/concepts/prompt-engineering?wt.mc_id=github_S-1231_webpage_reactor)
+🔗 [Introduction to Prompt Engineering](https://learn.microsoft.com/azure/ai-services/openai/concepts/prompt-engineering?wt.mc_id=github_S-1231_webpage_reactor)
-🔗 [Prompt Engineering Overview](https://learn.microsoft.com/semantic-kernel/prompt-engineering/?wt.mc_id=github_S-1231_webpage_reactor)
+🔗 [Prompt Engineering Overview](https://learn.microsoft.com/semantic-kernel/prompt-engineering/?wt.mc_id=github_S-1231_webpage_reactor)
-🔗 [Azure OpenAI for Education Prompts](https://techcommunity.microsoft.com/t5/education-blog/azure-openai-for-education-prompts-ai-and-a-guide-from-ethan-and/ba-p/3938259?wt.mc_id=github_S-1231_webpage_reactor )
+🔗 [Azure OpenAI for Education Prompts](https://techcommunity.microsoft.com/t5/education-blog/azure-openai-for-education-prompts-ai-and-a-guide-from-ethan-and/ba-p/3938259?wt.mc_id=github_S-1231_webpage_reactor )
-🔗 [Prompt Engineering Overview](https://learn.microsoft.com/en-us/semantic-kernel/prompt-engineering/)
-
## Lesson 5 - Creating Advanced Prompts
-🔗 [Prompt Engineering Techniques](https://learn.microsoft.com/azure/ai-services/openai/concepts/advanced-prompt-engineering?wt.mc_id=github_S-1231_webpage_reactor&pivots=programming-language-chat-completions)
-
-
-## Lesson 6 - Building Text Generation Applications
-
-🔗 [Prompt Engineering Techniques](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions)
+🔗 [Prompt Engineering Techniques](https://learn.microsoft.com/azure/ai-services/openai/concepts/advanced-prompt-engineering?wt.mc_id=github_S-1231_webpage_reactor&pivots=programming-language-chat-completions)
## Lesson 6 - Building Text Generation Applications
+🔗 [Prompt Engineering Techniques](https://learn.microsoft.com/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions)
+
## Lesson 7 - Building Chat Applications
🔗 [System message framework and template recommendations for Large Language Models (LLMs)](https://learn.microsoft.com/azure/ai-services/openai/concepts/system-message?wt.mc_id=github_S-1231_webpage_reactor)
🔗 [Learn how to work with the GPT-35-Turbo and GPT-4 models](https://learn.microsoft.com/azure/ai-services/openai/how-to/chatgpt?wt.mc_id=github_S-1231_webpage_reactor&pivots=programming-language-chat-completions)
-🔗 [Fine-Tuning language models from human preferences](https://arxiv.org/pdf/1909.08593.pdf?wt.mc_id=github_S-1231_webpage_reactor)
-
+🔗 [Fine-Tuning language models from human preferences](https://arxiv.org/pdf/1909.08593.pdf?wt.mc_id=github_S-1231_webpage_reactor)
🔗 [OpenAI Fine-Tuning](https://platform.openai.com/docs/guides/fine-tuning/when-to-use-fine-tuning?wt.mc_id=github_S-1231_webpage_reactor )
@@ -100,15 +86,14 @@ Are we missing a great resource? Let us know by submitting a PR!
🔗 [Generate Images with Azure OpenAI Service](https://learn.microsoft.com/training/modules/generate-images-azure-openai/?wt.mc_id=github_S-1231_webpage_reactor)
-🔗 [OpenAI's DALL-E and CLIP 101: A Brief Introduction](https://towardsdatascience.com/openais-dall-e-and-clip-101-a-brief-introduction-3a4367280d4e?wt.mc_id=github_S-1231_webpage_reactor)
+🔗 [OpenAI's DALL-E and CLIP 101: A Brief Introduction](https://towardsdatascience.com/openais-dall-e-and-clip-101-a-brief-introduction-3a4367280d4e?wt.mc_id=github_S-1231_webpage_reactor)
-🔗 [OpenAI's CLIP paper](https://arxiv.org/pdf/2103.00020.pdf?wt.mc_id=github_S-1231_webpage_reactor)
+🔗 [OpenAI's CLIP paper](https://arxiv.org/pdf/2103.00020.pdf?wt.mc_id=github_S-1231_webpage_reactor)
-
## Lesson 10 - Building Low Code AI Applications
🔗 [Add intelligence with AI Builder and GPT](https://learn.microsoft.com/training/modules/ai-builder-text-generation/?wt.mc_id=github_S-1231_webpage_reactor&WT.mc_id=academic-109639-somelezediko)
@@ -121,12 +106,10 @@ Are we missing a great resource? Let us know by submitting a PR!
🔗 [Power Platform Copilot Prompt Library](https://pnp.github.io/powerplatform-prompts/?wt.mc_id=github_S-1231_webpage_reactor&WT.mc_id=academic-109639-somelezediko)
-
## Lesson 11- Integrating Applications with Function Calling
🔗 [OpenAI Functions Documentation](https://learn.microsoft.com/azure/ai-services/openai/how-to/function-calling?wt.mc_id=github_S-1231_webpage_reactor)
-
## Lesson 12 - Designing UX for AI Applications
🔗 [Best practices for building collaborative UX with Human-AI partnership](https://learn.microsoft.com/community/content/best-practices-ai-ux?wt.mc_id=github_S-1231_webpage_reactor)
@@ -138,4 +121,3 @@ Are we missing a great resource? Let us know by submitting a PR!
🔗 [New skills in the age of AI by John Maeda](https://www.amazon.com/Designing-Human-Centric-Experiences-Artificial-Intelligence/dp/1484280873?wt.mc_id=github_S-1231_webpage_reactor)
🔗 [Designing Human-Centric AI Experiences: Applied UX Design for Artificial Intelligence by Akshay Kpre](https://www.amazon.com/Designing-Human-Centric-Experiences-Artificial-Intelligence/dp/1484280873)
-