From a17351371ea5f27c3858eda2524c799bacb821fc Mon Sep 17 00:00:00 2001 From: John A Date: Wed, 8 Nov 2023 13:42:05 +0200 Subject: [PATCH] fix: add tracking to paths --- 00-course-setup/README.md | 2 +- 01-introduction-to-genai/README.md | 18 +++++++-------- .../README.md | 22 +++++++++---------- 03-using-generative-ai-responsibly/README.md | 8 +++---- 04-prompt-engineering-fundamentals/README.md | 14 ++++++------ 05-advanced-prompts/README.md | 4 ++-- 06-text-generation-apps/README.md | 2 +- 07-building-chat-applications/README.md | 3 +-- 08-building-search-applications/README.md | 2 +- 09-building-image-applications/README.md | 12 +++++----- .../README.md | 2 +- .../README.md | 6 ++--- 12-designing-ux-for-ai-applications/README.md | 3 +-- README.md | 4 ++-- docs/_sidebar.md | 2 +- 15 files changed, 51 insertions(+), 53 deletions(-) diff --git a/00-course-setup/README.md b/00-course-setup/README.md index 21be18640..a1f9952d8 100644 --- a/00-course-setup/README.md +++ b/00-course-setup/README.md @@ -22,7 +22,7 @@ This can be created by selecting the `Code` option on your forked version of thi Keeping your API keys safe and secure is important when building any type of application. We encourage you not to store any API keys directly in the code you are working with as committing those details to a public repository could result in unwanted costs and issues. -![Dialog showing buttons to create a codespace](./images/who-will-pay.webp) +![Dialog showing buttons to create a codespace](./images/who-will-pay.webp?WT.mc_id=academic-105485-koreyst) ## How to Run locally on your computer diff --git a/01-introduction-to-genai/README.md b/01-introduction-to-genai/README.md index 617cda9ea..7439d3c26 100644 --- a/01-introduction-to-genai/README.md +++ b/01-introduction-to-genai/README.md @@ -1,6 +1,6 @@ # Introduction to Generative AI and Large Language Models -[![Introduction to Generative AI and Large Language Models](./images/01-lesson-banner.png)](https://youtu.be/vf_mZrn8ibc) +[![Introduction to Generative AI and Large Language Models](./images/01-lesson-banner.png?WT.mc_id=academic-105485-koreyst)](https://youtu.be/vf_mZrn8ibc) *(Click the image above to view video of this lesson)* @@ -34,7 +34,7 @@ Our startup team is aware we’ll not be able to achieve this goal without lever Generative AI is expected to revolutionize the way we learn and teach today, with students having at their disposal virtual teachers 24 hours a day who provide vast amounts of information and examples, and teachers able to leverage innovative tools to assess their students and give feedback. -![Five young students looking at a monitor - image by DALLE2](./images/students-by-DALLE2.png) +![Five young students looking at a monitor - image by DALLE2](./images/students-by-DALLE2.png?WT.mc_id=academic-105485-koreyst) To start, let’s define some basic concepts and terminology we’ll be using throughout the curriculum. @@ -61,7 +61,7 @@ This is the technology that powered the virtual assistants born in the first dec So that’s how we came to Generative AI today, which can be seen as a subset of deep learning. -![AI, ML, DL and Generative AI](./images/AI-diagram.png) +![AI, ML, DL and Generative AI](./images/AI-diagram.png?WT.mc_id=academic-105485-koreyst) After decades of research in the AI field, a new model architecture – called *Transformer* – overcame the limits of RNNs, being able to get much longer sequences of text as input. 
Transformers are based on the attention mechanism, enabling the model to give different weights to the inputs it receives, ‘paying more attention’ where the most relevant information is concentrated, regardless of where it appears in the text sequence.

@@ -73,7 +73,7 @@ In the next chapter we are going to explore different types of Generative AI mod

* **Tokenizer, text to numbers**: Large Language Models receive a text as input and generate a text as output. However, being statistical models, they work much better with numbers than text sequences. That’s why every input to the model is processed by a tokenizer, before being used by the core model. A token is a chunk of text consisting of a variable number of characters, so the tokenizer's main task is splitting the input into an array of tokens. Then, each token is mapped to a token index, which is the integer encoding of the original text chunk.

-![Example of tokenization](./images/tokenizer-example.png)
+![Example of tokenization](./images/tokenizer-example.png?WT.mc_id=academic-105485-koreyst)

* **Predicting output tokens**: Given n tokens as input (with max n varying from one model to another), the model is able to predict one token as output. This token is then incorporated into the input of the next iteration, in an expanding window pattern, enabling a better user experience of getting one (or more) sentences as an answer. This explains why, if you ever played with ChatGPT, you might have noticed that sometimes it looks like it stops in the middle of a sentence.

@@ -90,19 +90,19 @@ The input of a large language model is known as prompt, while the output is know

* An **instruction** specifying the type of output we expect from the model. This instruction sometimes might embed some examples or some additional data.

1. Summarization of an article, book, product reviews and more, along with extraction of insights from unstructured data.

-   ![Example of summarization](./images/summarization-example.png)
+   ![Example of summarization](./images/summarization-example.png?WT.mc_id=academic-105485-koreyst)

2. Creative ideation and design of an article, an essay, an assignment or more.

-   ![Example of creative writing](./images/creative-writing-example.png)
+   ![Example of creative writing](./images/creative-writing-example.png?WT.mc_id=academic-105485-koreyst)

* A **question**, asked in the form of a conversation with an agent.

-   ![Example of conversation](./images/conversation-example.png)
+   ![Example of conversation](./images/conversation-example.png?WT.mc_id=academic-105485-koreyst)

* A chunk of **text to complete**, which implicitly is an ask for writing assistance.

-   ![Example of text completion](./images/text-completion-example.png)
+   ![Example of text completion](./images/text-completion-example.png?WT.mc_id=academic-105485-koreyst)

* A chunk of **code** together with the ask of explaining and documenting it, or a comment asking to generate a piece of code performing a specific task.

-   ![Coding example](./images/coding-example.png)
+   ![Coding example](./images/coding-example.png?WT.mc_id=academic-105485-koreyst)

The examples above are quite simple and aren’t meant to be an exhaustive demonstration of Large Language Models’ capabilities. They are simply meant to show the potential of using generative AI, in particular but not limited to educational contexts.
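To make the tokenizer and token-index ideas above concrete, here is a minimal Python sketch. It assumes the open-source `tiktoken` package (`pip install tiktoken`); the lesson itself doesn't prescribe a specific tokenizer:

```python
# A minimal tokenization sketch. Assumes the open-source tiktoken package
# (pip install tiktoken); the lesson itself doesn't prescribe a tokenizer.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by several recent OpenAI models

text = "Generative AI is expected to revolutionize the way we learn and teach."
token_ids = encoding.encode(text)                 # text -> integer token indices

print(token_ids)                                  # e.g. [5648, 1413, ...]
print([encoding.decode([t]) for t in token_ids])  # the text chunk each index encodes
```

Running it shows how words, word pieces and whitespace each map to their own integer index, which is exactly the representation the model consumes and predicts.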
diff --git a/02-exploring-and-comparing-different-llms/README.md b/02-exploring-and-comparing-different-llms/README.md
index dc48fddd9..0f164e609 100644
--- a/02-exploring-and-comparing-different-llms/README.md
+++ b/02-exploring-and-comparing-different-llms/README.md
@@ -1,6 +1,6 @@
# Exploring and comparing different LLMs

-[![Exploring and comparing different LLMs](./images/02-lesson-banner.png)](https://youtu.be/J1mWzw0P74c)
+[![Exploring and comparing different LLMs](./images/02-lesson-banner.png?WT.mc_id=academic-105485-koreyst)](https://youtu.be/J1mWzw0P74c)

> *Click the image above to view video of this lesson*

@@ -48,14 +48,14 @@ The term Foundation Model was [coined by Stanford researchers](https://arxiv.org

- **They are very large models**, based on very deep neural networks with billions of parameters.
- **They are normally intended to serve as a ‘foundation’ for other models**, meaning they can be used as a starting point for other models to be built on top of, which can be done by fine-tuning.

-![Foundation Models versus LLMs](./images/FoundationModel.png)
+![Foundation Models versus LLMs](./images/FoundationModel.png?WT.mc_id=academic-105485-koreyst)

Image source: [Essential Guide to Foundation Models and Large Language Models | by Babar M Bhatti | Medium](https://thebabar.medium.com/essential-guide-to-foundation-models-and-large-language-models-27dab58f7404)

To further clarify this distinction, let’s take ChatGPT as an example. To build the first version of ChatGPT, a model called GPT-3.5 served as the foundation model. This means that OpenAI used some chat-specific data to create a tuned version of GPT-3.5 that was specialized in performing well in conversational scenarios, such as chatbots.

-![Foundation Model](./images/Multimodal.png)
+![Foundation Model](./images/Multimodal.png?WT.mc_id=academic-105485-koreyst)

Image source: [2108.07258.pdf (arxiv.org)](https://arxiv.org/pdf/2108.07258.pdf)

@@ -73,15 +73,15 @@ LLMs can also be categorized by the output they generate.

Embeddings are a set of models that convert text into a numerical representation of the input text, called an embedding. Embeddings make it easier for machines to understand the relationships between words or sentences, and can be consumed as inputs by other models, such as classification or clustering models, which perform better on numerical data. Embedding models are often used for transfer learning, where a model is built for a surrogate task for which there’s an abundance of data, and then the model weights (embeddings) are re-used for other downstream tasks. An example of this category is [OpenAI embeddings](https://platform.openai.com/docs/models/embeddings).

-![Embedding](./images/Embedding.png)
+![Embedding](./images/Embedding.png?WT.mc_id=academic-105485-koreyst)

Image generation models are models that generate images. These models are often used for image editing, image synthesis, and image translation. Image generation models are often trained on large datasets of images, such as [LAION-5B](https://laion.ai/blog/laion-5b/), and can be used to generate new images or to edit existing images with inpainting, super-resolution, and colorization techniques. Examples include [DALL-E-3](https://openai.com/dall-e-3) and [Stable Diffusion models](https://github.com/Stability-AI/StableDiffusion).
-![Image generation](./images/Image.png)
+![Image generation](./images/Image.png?WT.mc_id=academic-105485-koreyst)

Text and code generation models are models that generate text or code. These models are often used for text summarization, translation, and question answering. Text generation models are often trained on large datasets of text, such as [BookCorpus](https://www.cv-foundation.org/openaccess/content_iccv_2015/html/Zhu_Aligning_Books_and_ICCV_2015_paper.html), and can be used to generate new text, or to answer questions. Code generation models, like [CodeParrot](https://huggingface.co/codeparrot), are often trained on large datasets of code, such as GitHub, and can be used to generate new code, or to fix bugs in existing code.

- ![Text and code generation](./images/Text.png)
+ ![Text and code generation](./images/Text.png?WT.mc_id=academic-105485-koreyst)

### Encoder-Decoder versus Decoder-only

@@ -113,19 +113,19 @@ Most of the models we mentioned in previous paragraphs (OpenAI models, open sour

- Find the Foundation Model of interest in the catalog, filtering by task, license, or name. It’s also possible to import new models that are not yet included in the catalog.
- Review the model card, including a detailed description and code samples, and test it with the Sample Inference widget by providing a sample prompt.

-![Model card](./images/Llama1.png)
+![Model card](./images/Llama1.png?WT.mc_id=academic-105485-koreyst)

- Evaluate model performance with objective evaluation metrics on a specific workload and a specific set of input data.

-![Model evaluation](./images/Llama2.png)
+![Model evaluation](./images/Llama2.png?WT.mc_id=academic-105485-koreyst)

- Fine-tune the model on custom training data to improve model performance in a specific workload, leveraging the experimentation and tracking capabilities of Azure Machine Learning.

-![Model fine-tuning](./images/Llama3.png)
+![Model fine-tuning](./images/Llama3.png?WT.mc_id=academic-105485-koreyst)

- Deploy the original pre-trained model or the fine-tuned version to a remote real-time inference or batch endpoint, to enable applications to consume it.

-![Model deployment](./images/Llama4.png)
+![Model deployment](./images/Llama4.png?WT.mc_id=academic-105485-koreyst)

## Improving LLM results

@@ -143,7 +143,7 @@ deploy an LLM in production, with different levels of complexity, cost, and qual

- **Fine-tuned model**. Here, you train the model further on your own data, which makes the model more accurate and responsive to your needs, but can be costly.
-![LLMs deployment](./images/Deploy.png)
+![LLMs deployment](./images/Deploy.png?WT.mc_id=academic-105485-koreyst)

Image source: [Four Ways that Enterprises Deploy LLMs | Fiddler AI Blog](https://www.fiddler.ai/blog/four-ways-that-enterprises-deploy-llms)

diff --git a/03-using-generative-ai-responsibly/README.md b/03-using-generative-ai-responsibly/README.md
index 5f610b794..37d20ad16 100644
--- a/03-using-generative-ai-responsibly/README.md
+++ b/03-using-generative-ai-responsibly/README.md
@@ -1,6 +1,6 @@
# Using Generative AI Responsibly

-[![Using Generative AI Responsibly](./images/genai_course_3[77].png)]()
+[![Using Generative AI Responsibly](./images/genai_course_3[77].png?WT.mc_id=academic-105485-koreyst)]()

> **Video Coming Soon**

@@ -44,7 +44,7 @@ Let's take for example we build a feature for our startup that allows students t

The model produces a response like the one below:

-![Prompt saying "Who was the sole survivor of the Titanic"](../03-using-generative-ai-responsibly/images/2135-ChatGPT(1)_11zon.webp)
+![Prompt saying "Who was the sole survivor of the Titanic"](../03-using-generative-ai-responsibly/images/2135-ChatGPT(1)_11zon.webp?WT.mc_id=academic-105485-koreyst)

> *(Source: [Flying bisons](https://flyingbisons.com))*

@@ -76,7 +76,7 @@ These types of outputs are not only destructive to building positive product exp

Now that we have identified the importance of Responsible Generative AI, let's look at 4 steps we can take to build our AI solutions responsibly:

-![Mitigate Cycle](./images/mitigate-cycle.png)
+![Mitigate Cycle](./images/mitigate-cycle.png?WT.mc_id=academic-105485-koreyst)

### Measure Potential Harms

@@ -88,7 +88,7 @@ Since our startup is building an education product, it would be good to prepare

It is now time to find ways to prevent or limit the potential harm caused by the model and its responses. We can look at this in 4 different layers:

-![Mitigation Layers](./images/mitigation-layers.png)
+![Mitigation Layers](./images/mitigation-layers.png?WT.mc_id=academic-105485-koreyst)

- **Model**. Choosing the right model for the right use case. Larger and more complex models like GPT-4 can pose a greater risk of producing harmful content when applied to smaller and more specific use cases. Using your training data to fine-tune also reduces the risk of harmful content.

diff --git a/04-prompt-engineering-fundamentals/README.md b/04-prompt-engineering-fundamentals/README.md
index 6fe1842e0..00ac9d598 100644
--- a/04-prompt-engineering-fundamentals/README.md
+++ b/04-prompt-engineering-fundamentals/README.md
@@ -1,6 +1,6 @@
# Prompt Engineering Fundamentals

-[![Prompt Engineering Fundamentals](./img/04-lesson-banner.png)](https://youtu.be/r2ItK3UMVTk)
+[![Prompt Engineering Fundamentals](./img/04-lesson-banner.png?WT.mc_id=academic-105485-koreyst)](https://youtu.be/r2ItK3UMVTk)

How you write your prompt to the LLM matters; a carefully crafted prompt can achieve a better result than one that isn't. But what exactly are these concepts of prompt and prompt engineering, and how do I improve what I send to the LLM? Questions like these are what this chapter and the upcoming chapter look to answer.

@@ -77,7 +77,7 @@ An LLM sees prompts as a _sequence of tokens_ where different models (or version

To get an intuition for how tokenization works, try tools like the [OpenAI Tokenizer](https://platform.openai.com/tokenizer) shown below.
Copy in your prompt - and see how that gets converted into tokens, paying attention to how whitespace characters and punctuation marks are handled. Note that this example shows an older LLM (GPT-3) - so trying this with a newer model may produce a different result.

-![Tokenization](./img/4.0-tokenizer-example.png)
+![Tokenization](./img/4.0-tokenizer-example.png?WT.mc_id=academic-105485-koreyst)

### Concept: Foundation Models

@@ -87,7 +87,7 @@ Want to see how prompt-based completion works? Enter the above prompt into the A

But what if the user wanted to see something specific that met some criteria or task objective? This is where _instruction-tuned_ LLMs come into the picture.

-![Base LLM Chat Completion](./img/4.0-playground-chat-base.png)
+![Base LLM Chat Completion](./img/4.0-playground-chat-base.png?WT.mc_id=academic-105485-koreyst)

### Concept: Instruction Tuned LLMs

@@ -101,7 +101,7 @@ Let's try it out - revisit the prompt above but now change the _system message_

See how the result is now tuned to reflect the desired goal and format? An educator can now directly use this response in their slides for that class.

-![Instruction Tuned LLM Chat Completion](./img/4.0-playground-chat-instructions.png)
+![Instruction Tuned LLM Chat Completion](./img/4.0-playground-chat-instructions.png?WT.mc_id=academic-105485-koreyst)

## Why do we need Prompt Engineering?

@@ -129,15 +129,15 @@ So what happens when we run this prompt with different LLM providers?

> **Response 1**: OpenAI Playground (GPT-35)

-![Response 1](./img/4.0-hallucination-oai.png)
+![Response 1](./img/4.0-hallucination-oai.png?WT.mc_id=academic-105485-koreyst)

> **Response 2**: Azure OpenAI Playground (GPT-35)

-![Response 2](./img/4.0-hallucination-aoai.png)
+![Response 2](./img/4.0-hallucination-aoai.png?WT.mc_id=academic-105485-koreyst)

> **Response 3**: Hugging Face Chat Playground (Llama-2)

-![Response 3](./img/4.0-hallucination-huggingchat.png)
+![Response 3](./img/4.0-hallucination-huggingchat.png?WT.mc_id=academic-105485-koreyst)

As expected, each model (or model version) produces slightly different responses thanks to stochastic behavior and model capability variations. For instance, one model targets an 8th grade audience while the other assumes a high-school student. But all three models did generate responses that could convince an uninformed user that the event was real.

diff --git a/05-advanced-prompts/README.md b/05-advanced-prompts/README.md
index 0d4c46565..0ad4768a0 100644
--- a/05-advanced-prompts/README.md
+++ b/05-advanced-prompts/README.md
@@ -1,6 +1,6 @@
# Creating Advanced Prompts

-[![Creating Advanced Prompts](./images/05-lesson-banner.png)](https://youtu.be/32GBH6BTWZQ)
+[![Creating Advanced Prompts](./images/05-lesson-banner.png?WT.mc_id=academic-105485-koreyst)](https://youtu.be/32GBH6BTWZQ)

Let's recap some learnings from the previous chapter:

@@ -601,7 +601,7 @@ Please attempt to solve the assignment by adding suitable prompts to the code.

> [!TIP]
> When phrasing a prompt that asks for improvements, it's a good idea to limit how many improvements it should make. You can also ask it to improve the code in a certain way, for example architecture, performance, security, etc.
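As a sketch of what such a constrained review prompt could look like in code - assuming the v0.x `openai` Python SDK used elsewhere in this course, a placeholder model name, and stand-in code rather than the assignment's solution:

```python
# A sketch only: assumes the openai Python SDK v0.x and a placeholder model
# name; the code under review is a stand-in, not the assignment's solution.
import openai

code_to_review = """
def add(a, b):
    return a + b
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Limit the number of improvements and name the dimension, as the tip suggests.
        {"role": "system",
         "content": "You are a senior Python reviewer. Suggest at most three improvements, focused only on security."},
        {"role": "user", "content": f"Improve this code:\n{code_to_review}"},
    ],
)

print(response.choices[0].message.content)
```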
-[Solution](./solution.py)
+[Solution](./solution.py?WT.mc_id=academic-105485-koreyst)

## Knowledge check

diff --git a/06-text-generation-apps/README.md b/06-text-generation-apps/README.md
index 74166a89d..cebb015fb 100644
--- a/06-text-generation-apps/README.md
+++ b/06-text-generation-apps/README.md
@@ -1,6 +1,6 @@
# Building Text Generation Applications

-[![Building Text Generation Applications](./images/06-lesson-banner.png)](https://youtu.be/5jKHzY6-4s8)
+[![Building Text Generation Applications](./images/06-lesson-banner.png?WT.mc_id=academic-105485-koreyst)](https://youtu.be/5jKHzY6-4s8)

> *(Click the image above to view video of this lesson)*

diff --git a/07-building-chat-applications/README.md b/07-building-chat-applications/README.md
index 74deded6a..704fa9286 100644
--- a/07-building-chat-applications/README.md
+++ b/07-building-chat-applications/README.md
@@ -1,7 +1,6 @@
# Building Generative AI-Powered Chat Applications

-[![Building Generative AI-Powered Chat Applications](./img/07-lesson-banner.png
-)](https://youtu.be/Kw4i-tlKMrQ)
+[![Building Generative AI-Powered Chat Applications](./img/07-lesson-banner.png?WT.mc_id=academic-105485-koreyst)](https://youtu.be/Kw4i-tlKMrQ)

> *(Click the image above to view video of this lesson)*

diff --git a/08-building-search-applications/README.md b/08-building-search-applications/README.md
index 9f9c92fe5..047710830 100644
--- a/08-building-search-applications/README.md
+++ b/08-building-search-applications/README.md
@@ -1,6 +1,6 @@
# Building Search Applications

-[![Introduction to Generative AI and Large Language Models](./media/genai_course_8[80].png)](TBD)
+[![Introduction to Generative AI and Large Language Models](./media/genai_course_8[80].png?WT.mc_id=academic-105485-koreyst)](TBD)

> **Video Coming Soon**

diff --git a/09-building-image-applications/README.md b/09-building-image-applications/README.md
index 335b5375d..9fa236fb4 100644
--- a/09-building-image-applications/README.md
+++ b/09-building-image-applications/README.md
@@ -1,6 +1,6 @@
# Building Image Generation Applications

-[![Building Image Generation Applications](./images/genai_course_9[70].png)](TBD)
+[![Building Image Generation Applications](./images/genai_course_9[70].png?WT.mc_id=academic-105485-koreyst)](TBD)

> **Video Coming Soon**

@@ -36,7 +36,7 @@ As part of this lesson, we will continue to work with our startup, Edu4All, in t

Here's what Edu4All's students could generate, for example, if they're working in class on monuments:

-![Edu4All startup, class on monuments, Eiffel Tower](./images/startup.png)
+![Edu4All startup, class on monuments, Eiffel Tower](./images/startup.png?WT.mc_id=academic-105485-koreyst)

using a prompt like

@@ -282,11 +282,11 @@ Let's look at an example of how temperature works, by running this prompt twice:

> Prompt : "Bunny on horse, holding a lollipop, on a foggy meadow where it grows daffodils"

-![Bunny on a horse holding a lollipop, version 1](./images/v1-generated_image.png)
+![Bunny on a horse holding a lollipop, version 1](./images/v1-generated_image.png?WT.mc_id=academic-105485-koreyst)

Now let's run that same prompt just to see that we won't get the same image twice:

-![Generated image of bunny on horse](./images/v2-generated_image.png)
+![Generated image of bunny on horse](./images/v2-generated_image.png?WT.mc_id=academic-105485-koreyst)

As you can see, the images are similar, but not the same.
Let's try changing the temperature value to 0.1 and see what happens:

```python
generation_response = openai.Image.create(
```

Now when you run this code, you get these two images:

-- ![Temperature 0, v1](./images/v1-0temp-generated_image.png)
-- ![Temperature 0 , v2](./images/v2-0temp-generated_image.png)
+- ![Temperature 0, v1](./images/v1-0temp-generated_image.png?WT.mc_id=academic-105485-koreyst)
+- ![Temperature 0 , v2](./images/v2-0temp-generated_image.png?WT.mc_id=academic-105485-koreyst)

Here you can clearly see how the images resemble each other more.

diff --git a/10-building-low-code-ai-applications/README.md b/10-building-low-code-ai-applications/README.md
index 479e08194..c25fbe409 100644
--- a/10-building-low-code-ai-applications/README.md
+++ b/10-building-low-code-ai-applications/README.md
@@ -1,6 +1,6 @@
# Building Low Code AI Applications

-[![Building Low Code AI Applications](./images/10-lesson-banner.png)](https://youtu.be/XX8491SAF44)
+[![Building Low Code AI Applications](./images/10-lesson-banner.png?WT.mc_id=academic-105485-koreyst)](https://youtu.be/XX8491SAF44)

> *(Click the image above to view video of this lesson)*

diff --git a/11-integrating-with-function-calling/README.md b/11-integrating-with-function-calling/README.md
index 6d98ca95b..d21b6a0a6 100644
--- a/11-integrating-with-function-calling/README.md
+++ b/11-integrating-with-function-calling/README.md
@@ -1,6 +1,6 @@
# Integrating with function calling

-![chapter image](./images/genai_course_11[90].png)
+![chapter image](./images/genai_course_11[90].png?WT.mc_id=academic-105485-koreyst)

You've learned a fair bit so far in the previous lessons. However, we can improve further. Some things we can address are getting a more consistent response format, making it easier to work with responses downstream. Also, we might want to add data from other sources to further enrich our application.

@@ -163,7 +163,7 @@ Now we can send both requests to the LLM and examine the response we receive by

So how do we solve the formatting problem then? By using function calling, we can make sure that we receive structured data back. When using function calling, the LLM does not actually call or run any functions. Instead, we create a structure for the LLM to follow for its responses. We then use those structured responses to know what function to run in our applications.

-![function flow](./images/Function-Flow.png)
+![function flow](./images/Function-Flow.png?WT.mc_id=academic-105485-koreyst)

We can then take what is returned from the function and send this back to the LLM. The LLM will then respond using natural language to answer the user's query.

@@ -185,7 +185,7 @@ The process of creating a function call includes 3 main steps:

2. **Reading** the model's response to perform an action, i.e. execute a function or API call.
3. **Making** another call to the Chat Completions API with the response from your function to use that information to create a response to the user.
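A rough sketch of these three steps in code, assuming the v0.x `openai` SDK's function-calling interface - the `search_courses` schema and helper below are illustrative placeholders, not the lesson's actual code:

```python
# A rough sketch of the three steps. Assumes the openai Python SDK v0.x
# function-calling interface; the schema and helper below are illustrative
# placeholders, not the lesson's actual code.
import json
import openai

def search_courses(role):
    # Stand-in for a real lookup, e.g. querying a course catalog API.
    return json.dumps([{"title": f"Getting started as a {role}"}])

functions = [{
    "name": "search_courses",
    "description": "Retrieves courses from a catalog based on the user's role",
    "parameters": {
        "type": "object",
        "properties": {
            "role": {"type": "string", "description": "The learner's role"},
        },
        "required": ["role"],
    },
}]

messages = [{"role": "user", "content": "Find a good course for a beginner data scientist."}]

# Step 1: call the Chat Completions API with the function definitions.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo", messages=messages, functions=functions, function_call="auto"
)
message = response["choices"][0]["message"]

# Step 2: read the structured response and run the matching function ourselves.
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    result = search_courses(**args)

    # Step 3: send the function result back so the model can phrase a natural-language answer.
    messages.append(message)
    messages.append({"role": "function", "name": "search_courses", "content": result})
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(final["choices"][0]["message"]["content"])
```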
-![LLM Flow](./images/LLM-Flow.png) +![LLM Flow](./images/LLM-Flow.png?WT.mc_id=academic-105485-koreyst) ### Step 1 - creating messages diff --git a/12-designing-ux-for-ai-applications/README.md b/12-designing-ux-for-ai-applications/README.md index 5ab2c40f0..8466e2cdd 100644 --- a/12-designing-ux-for-ai-applications/README.md +++ b/12-designing-ux-for-ai-applications/README.md @@ -1,7 +1,6 @@ # Designing UX for AI Applications -[![Designing UX for AI Applications](./images/12-lesson-banner.png)](https://youtu.be/bO7h2_hOhR0) - +[![Designing UX for AI Applications](./images/12-lesson-banner.png?WT.mc_id=academic-105485-koreyst)](https://youtu.be/bO7h2_hOhR0) > *(Click the image above to view video of this lesson)* diff --git a/README.md b/README.md index 868614d3b..f0ed63cf5 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,5 @@ -![Generative AI For Beginners](./img/1.png) +![Generative AI For Beginners](./img/1.png?WT.mc_id=academic-105485-koreyst) ### A 12 Lesson course teaching everything you need to know to start building Generative AI applications @@ -27,7 +27,7 @@ To get started, [fork this entire repo](https://github.com/microsoft/generative- Below are the links to each lesson. Feel free to explore and start at any lesson that interests you the most! -Head to the [Course Setup Page](./00-course-setup/README.md) to find the setup guide that works best for you. +Head to the [Course Setup Page](./00-course-setup/README.md?WT.mc_id=academic-105485-koreyst) to find the setup guide that works best for you. ## 🗣️ Meet Other Learners, Get Support diff --git a/docs/_sidebar.md b/docs/_sidebar.md index c6afdab65..e2efa73a2 100644 --- a/docs/_sidebar.md +++ b/docs/_sidebar.md @@ -1,2 +1,2 @@ - Getting Started - - [Introduction to Generative AI](../01-introduction-to-genai/README.md) + - [Introduction to Generative AI](../01-introduction-to-genai/README.md?WT.mc_id=academic-105485-koreyst)