Skip to content

Commit

Permalink
added lesson 4
Browse files Browse the repository at this point in the history
  • Loading branch information
koreyspace committed Oct 26, 2023
1 parent 2efef18 commit 9465459
Show file tree
Hide file tree
Showing 5 changed files with 47 additions and 79 deletions.
1 change: 1 addition & 0 deletions 01-introduction-to-genai/README.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,6 +2,7 @@

[![Introduction to Generative AI and Large Language Models](./images/genai_course_1[83].png)](https://youtu.be/vf_mZrn8ibc)

*(Click the image above to view video of this lesson)*

Generative AI is artificial intelligence capable of generating text, images and other type of content. What makes it a fantastic technology is that it democratizes AI, anyone can use it with as little as a text prompt, a sentence written in a natural language. There's no need for you to learn a language like Java or SQL to accomplish something worthwhile, all you need is to use your language, state what you want and out comes a suggestion from an AI model. The applications and impact for this is huge, you write or understand reports, write applications and much more, all in seconds.

Expand Down
1 change: 1 addition & 0 deletions 02-exploring-and-comparing-different-llms/README.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,6 +2,7 @@

[![Exploring and comparing different LLMs](./images/genai_course_2[56].png)](https://youtu.be/J1mWzw0P74c)

*(Click the image above to view video of this lesson)*

## Introduction

Expand Down
115 changes: 40 additions & 75 deletions 03-using-generative-ai-responsibly /README.MD
Original file line number Diff line number Diff line change
Expand Up @@ -5,153 +5,118 @@
**Video Coming Soon**


## Introduction

This lesson will cover:
- Why you should prioritize Responsible AI when building Generative AI applications.
- Core principles of Responsible AI and how they relate to Generative AI.
- How to put these Responsible AI principles into practice through strategy and tooling.

## Learning Goals

After completing this lesson you will know:
- The importance of Responsible AI when building Generative AI applications.
- When to think and apply the core principles of Responsible AI during your application building process.
- What tools and strategies are available to you to put the concept of Responsible AI into practice.
- When to think and apply the core principles of Responsible AI when building Generative AI applications.
- What tools and strategies are available to you to put the concept of Responsible AI into practice.


## Responsible AI Principles

The excitement of Generative AI has never been higher. This excitement has brought a lot of new developers, attention and funding to this space. While this is very positive for anyone looking to build products and companies using Generative AI, it is also important we proceed responsibly.
The excitement of Generative AI has never been higher. This excitement has brought a lot of new developers, attention, and funding to this space. While this is very positive for anyone looking to build products and companies using Generative AI, it is also important we proceed responsibly.

Throughout this course we are focusing on building our startup and our AI education product. Let's look at the principles of Responsible AI and how they relate to our use of Generative AI in our products.
Throughout this course, we are focusing on building our startup and our AI education product. Let's look at the principles of Responsible AI and how they relate to our use of Generative AI in our products.


## Why Should You Prioritise Responsible AI

When building product, taking a human centric approach by keeping your users best interest in mind leads to the best results.
When building a product, taking a human-centric approach by keeping your user's best interest in mind leads to the best results.

The uniqueness of Generative AI is its power to create helpful answers, information, guidance and content for users. This can be done without many manual steps which can lead to very impressive results. Without proper planning and strategies, it can also unfortunately lead to some harmful results both for your users, your product and society as a whole.
The uniqueness of Generative AI is its power to create helpful answers, information, guidance, and content for users. This can be done without many manual steps which can lead to very impressive results. Without proper planning and strategies, it can also unfortunately lead to some harmful results for your users, your product, and society as a whole.

Let's look at some (but not all) of these potentially harmful results:

### Hallucinations

Hallucinations are a term used to describe when an LLM produces content that is either completely nonsensical or something we know is factually wrong based on other sources of information.

Let's take for example we build a feature for our startup that allows students to ask historical questions to a model. A student asks the question `Who was the sole surivor of Titanic?`
Let's take for example we build a feature for our startup that allows students to ask historical questions to a model. A student asks the question `Who was the sole survivor of Titanic?`

The model produces a a response like the one below:
The model produces a response like the one below:


![](/03-using-generative-ai-responsibly%20/images/2135-ChatGPT(1)_11zon.webp)

*(Source: https://flyingbisons.com)*

This is a very confident and thorough answer. Unfortunately, it is incorrect. Even with a minimal amount of research, one would discover there was more than one survivor of the Titantic survivor. For a student that is just starting to research this topic, this answer can be persuasive enough to not be questioned and treated as fact.
This is a very confident and thorough answer. Unfortunately, it is incorrect. Even with a minimal amount of research, one would discover there was more than one survivor of the Titanic survivor. For a student who is just starting to research this topic, this answer can be persuasive enough to not be questioned and treated as fact.

With each iteration of any given LLM we have seen performance improvements around minimising hallucinations. Even with this improvement, we as applications builders and users still need to remain aware of these limitations.
With each iteration of any given LLM, we have seen performance improvements around minimizing hallucinations. Even with this improvement, we as application builders and users still need to remain aware of these limitations.


### Harmful Content

We covered in the earlier section when a LLM produces incorrect or nonsensical responses. Another risk we need to be aware of is when a model responds with harmful content.
We covered in the earlier section when an LLM produces incorrect or nonsensical responses. Another risk we need to be aware of is when a model responds with harmful content.

Harmful content can be defined as:
- Providing instructions or encouraging self harm or harm to certain groups
- Providing instructions or encouraging self-harm or harm to certain groups
- Hateful or demeaning content
- Providing guidance on planning any type of attack or violent acts
- Guiding planning any type of attack or violent acts
- Providing instructions on how to find illegal content or commit illegal acts

For our startup, we want to make sure we have the right tools and strategies in place to prevent this type of content from being seen to students.
For our startup, we want to make sure we have the right tools and strategies in place to prevent this type of content from being seen by students.

### Lack of Fairness

Fairness is defined as "ensuring that an AI system is free from bias and discrimination and that they treat everyone fairly and equally." In the world of Generative AI, that exclusionary worldviews of marginalised groups are not reinforced by the model's output.
Fairness is defined as "ensuring that an AI system is free from bias and discrimination and that they treat everyone fairly and equally." In the world of Generative AI, we want to ensure that exclusionary worldviews of marginalized groups are not reinforced by the model's output.

These type of outputs are not only harmful to build positive product experiences for our users, they also cause further societal harm. As application builders, we should always keep a wide and diverse user base in mind when building solutions with Generative AI.
These types of outputs are not only harmful to building positive product experiences for our users, but they also cause further societal harm. As application builders, we should always keep a wide and diverse user base in mind when building solutions with Generative AI.

## How to Use Generative AI Responsibly

Now that we have identified the importance of Responsible Generative AI, let's look at 4 steps we can do to build our AI solutions responsibly:
Now that we have identified the importance of Responsible Generative AI, let's look at 4 steps we can take to build our AI solutions responsibly:

![Mitigate Cycle](./images/mitigate-cycle.png)



### Identify Potential Harms

In the earlier section we discussed some of the potential harms when building a Generative AI Solution. These harms can change based on the services and the models you are using. Using techniques such as fine-tuning or grounding your data that provides a higher level control on the model's output should also be considered when listing potential harms.

In this course, we will be building applications for our startups that generates images, text, chat responses and API calls to external services. Each of these applications come with their own unique set of potential harms.


### Measure Potential Harms
In software testing, we test the expected actions of a user on an application. Similarly, testing a diverse set of prompts users are most likely going to use is a good way to measure potential harm.

Since our startup is building an education product, it would be good to prepare a list of education-related prompts. This could be to cover a certain subject, historical facts, and prompts about student life.

### Mitigate Potential Harms


### Operate a responsible generative AI solution

You will see as we build our startup we can use Generative AI to create text applications, new images, improve search results, create better chat experiences and integrate it with other parts of our application.



## Tools and Strategies for Responsible AI

### Mitigate Potential Harms
It is now time to find ways where we can prevent or limit the potential harm caused by the model and its responses. We can look at this in 4 different layers:

### Creating Mitigation Layers
![Mitigation Layers](./images/mitigation-layers.png)



**Model**

Choosing the right model for the right use case. Larger and more complex models like GPT-4 can cause more of a risk of harmful content when applied to smaller and more specific use cases. Using your training data to fine-tune also reduces the risk of harmful content.

**Safety System**

A safety system is a set of tools and configurations on the platform serving the model that help mitigate harm. An example of this is the content filtering system on the Azure OpenAI service. Systems can also detect over usage and unwanted activity like requests from bots.

**Metaprompt**

Metaprompts and grounding are ways we can direct or limit the model based on certain behaviors and information. This could be using system inputs to define certain limits of the model.

**User Experience**

### Adversarial Testing



### Azure AI Content Safety
It can also be using techniques like Retrieval Augmented Generation (RAG) to have the model only pull information from a selection of trusted sources. There is a lesson later in this course for [building search applications](../08-building-search-applications/README.md)

**User Experience**

https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview
The final layer is where the user interacts directly with the model through our application's interface in some way. In this way we can design the UI/UX to limit the user on the types of inputs they can send to the model. We also must be transparent about what our Generative AI application can and can't do.

We have an entire lesson dedicated to [Designing UX for AI Applications](../12-designing-ux-for-ai-applications/README.md)

```python
# Create an Content Safety client
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# Build request
with open(image_path, "rb") as file:
request = AnalyzeImageOptions(image=ImageData(content=file.read()))
### Operate a Responsible Generative AI solution

# Analyze image
try:
response = client.analyze_image(request)
except HttpResponseError as e:
print("Analyze image failed.")
if e.error:
print(f"Error code: {e.error.code}")
print(f"Error message: {e.error.message}")
raise
print(e)
raise
Building an operational practice around your AI applications is the final stage. This includes partnering with other parts of our startup like Legal and Security to ensure we are compliant with all policies. Before launching, we also want to build plans around delivery, handling incidents, and rollback to prevent any harm to our users from growing.

if response.hate_result:
print(f"Hate severity: {response.hate_result.severity}")
if response.self_harm_result:
print(f"SelfHarm severity: {response.self_harm_result.severity}")
if response.sexual_result:
print(f"Sexual severity: {response.sexual_result.severity}")
if response.violence_result:
print(f"Violence severity: {response.violence_result.severity}")

# [END analyze_image]
## Tools

```
While the work of developing Responsible AI solutions may seem like a lot, it is work well worth the effort. As the area of Generative AI grows, more tooling to help developers efficiently integrate responsibility into their workflows will mature. For example, the [Azure AI Content Saftey](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview ) can help detect harmful content and images via an API request.


## Great Work, Continue Your Learning!
Expand All @@ -160,5 +125,5 @@ https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview
Want to learn more about how to build with Generative AI responsibly? Go to the [contiuned learning page](../13-continued-learning/README.md) to find other great resources on this topic.


Head over to the Lesson 4 where we will look at [Prompt Engineering Fundamentals](/4-prompt-engineering-fundamentals/README.md)!
Head over to Lesson 4 where we will look at [Prompt Engineering Fundamentals](/4-prompt-engineering-fundamentals/README.md)!

1 change: 1 addition & 0 deletions 06-text-generation-apps/README.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,6 +2,7 @@

[![Building Text Generation Applications ](./images/genai_course_6[95].png)](https://youtu.be/5jKHzY6-4s8)

*(Click the image above to view video of this lesson)*

You've seen so far through this curriculum that there are core concepts like prompts and even a whole discipline called "prompt engineering". Many tools you can interact with like ChatGPT, Office 365, Microsoft Power Platform and more, supports you using prompts to accomplish something.

Expand Down
8 changes: 4 additions & 4 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -49,9 +49,9 @@ We believe one of the best ways to learn is learning with others! Join our [offi
## 🗃️ Lessons
| | Lesson Link | Concepts Taught | Learning Goal |
| :---: | :------------------------------------: | :---------------------------------------------------------: | ----------------------------------------------------------- |
| 00 | [Course Introduction - How to Take This Course](/00-course-setup/README.md) | Tech setup and course structure | Setting you up for success while learning in this course| [Introduction to Generative AI](./1-getting-started/lessons/1-introduction-to-generative-ai/README.md)
| 01 | [Introduction to Generative AI and LLMs](./1-getting-started/README.md) | Generative AI and how we landed on the current technology landscape| Understanding what Generative AI is and how Large Language Models (LLMs) work. |
| 02 | [Exploring and comparing different LLMs](./2-exploring-and-comparing-different-llms/) |Testing, iterating, and comparing different Large Language Models | Select the right model for your use case |
| 00 | [Course Introduction - How to Take This Course](/00-course-setup/README.md) | Tech setup and course structure | Setting you up for success while learning in this course|
| 01 | [Introduction to Generative AI and LLMs](./01-introduction-to-genai/README.md) | Generative AI and how we landed on the current technology landscape| Understanding what Generative AI is and how Large Language Models (LLMs) work. |
| 02 | [Exploring and comparing different LLMs](./02-exploring-and-comparing-different-llms/README.md) |Testing, iterating, and comparing different Large Language Models | Select the right model for your use case |
| 03 | [Using Generative AI Responsibly](./03-using-generative-ai-responsibly%20/README.MD)| Understanding the limitations of foundation models and the risks behind AI | Learn how to build Generative AI Applications responsibly
| 04 | [Understanding Prompt Engineering Fundamentals](./4-prompt-engineering-fundamentals/) | Hands-on application of Prompt Engineering Best Practices | Understand prompt structure & usage|
| 05 | [Creating Advanced Prompts](./05-advanced-prompts/README.md) | Extend your knowledge of prompt engineering by applying different techniques to your prompts | Apply prompt engineering techniques that improve the outcome of your prompts.|
Expand All @@ -62,7 +62,7 @@ We believe one of the best ways to learn is learning with others! Join our [offi
| 10 | [Building Low Code AI Applications](./10-building-low-code-ai-applications/) | Introduction to Generative AI in Power Platform | Build a Student Assignment Tracker App for our education startup with Low Code | |
| 11 | [Integrating External Applications with Function Calling](./11%20-%20Integrating%20External%20Applications%20with%20Function%20Calling%20/) | What is function calling and its use cases for applications | Setup a function call to retrieve data from an external API | |
| 12 | [Designing UX for AI Applications](./12-designing-ux-for-ai-applications/) | Designing AI Applications for Trust and Transparency | Apply UX design principles when developing Generative AI Applications | |
| xx | [Continue Your Learning](./13%20-%20contiuned-learning/) | Links to continue your learning from each lesson! | Mastering your Generative AI skills | |
| xx | [Continue Your Learning](./13-continued-learning/README.md) | Links to continue your learning from each lesson! | Mastering your Generative AI skills | |



Expand Down

0 comments on commit 9465459

Please sign in to comment.