Merged
8 changes: 4 additions & 4 deletions 01-IntroToGenAI/readme.md
@@ -15,7 +15,7 @@ _⬆️Click the image to watch the video⬆️_

## Generative AI Fundamentals for .NET

Before we dive in to some code, let's take a minute to review some generative AI (GenAI) concepts. In this lesson, **Generative AI Fundamentals for .NET**, we'll refresh some fundamental GenAI concepts so you can understand why certain things are done like they are. And we'll introduce the tooling and SDKs you'll use to build apps, like **MEAI** (Microsoft.Extensions.AI), **Semantic Kernel**, and the **AI Toolkit Extension for VS Code**.
Before we dive into some code, let's take a minute to review some generative AI (GenAI) concepts. In this lesson, **Generative AI Fundamentals for .NET**, we'll refresh some fundamental GenAI concepts so you can understand why certain things are done like they are. And we'll introduce the tooling and SDKs you'll use to build apps, like **MEAI** (Microsoft.Extensions.AI), **Semantic Kernel**, and the **AI Toolkit Extension for VS Code**.

### A quick refresh on Generative AI concepts

@@ -29,7 +29,7 @@ As you develop your .NET AI applications, you'll work with **generative AI model

There are specific types of models that are optimized for different tasks. For example, **Small Language Models (SLMs)** are ideal for text generation, while **Large Language Models (LLMs)** are more suitable for complex tasks like code generation or image analysis. And from there different companies and groups develop models, like Microsoft, OpenAI, or Anthropic. The specific one you use will depend on your use case and the capabilities you need.

Of course, the responses from these models are not perfect all the time. You're probably heard about models "hallucinating" or generating incorrect information in an authoritative manner. But you can help guide the model to generate better responses by providing them with clear instructions and context. This is where **prompt engineering** comes in.
Of course, the responses from these models are not perfect all the time. You've probably heard about models "hallucinating" or generating incorrect information in an authoritative manner. But you can help guide the model to generate better responses by providing them with clear instructions and context. This is where **prompt engineering** comes in.

#### Prompt engineering review

@@ -41,7 +41,7 @@ Prompt engineering is the practice of designing effective inputs to guide AI mod

Some best practices for prompt engineering include, prompt design, clear instructions, task breakdown, one shot and few shot learning, and prompt tuning. Plus, trying and testing different prompts to see what works best for your specific use case.

And it's important to note there are different types of prompts when developing applications. For example, you'll be responsbile for setting **system prompts** that set the base rules and context for the model's response. The data the user of your application feeds into the model are known as **user prompts**. And **assistant prompts** are the responses the model generates based on the system and user prompts.
And it's important to note there are different types of prompts when developing applications. For example, you'll be responsible for setting **system prompts** that set the base rules and context for the model's response. The data the user of your application feeds into the model are known as **user prompts**. And **assistant prompts** are the responses the model generates based on the system and user prompts.

> 🧑‍🏫 **Learn more**: Learn more about prompt engineering in [Prompt Engineering chapter of GenAI for Beginners course](https://github.com/microsoft/generative-ai-for-beginners/tree/main/04-prompt-engineering-fundamentals)

@@ -58,7 +58,7 @@ When developing .NET AI applications, you'll work with tokens, embeddings, and a

### AI Development Tools and Libraries for .NET

.NET offers a range of tooling for AI development. Lets take a minute to understand some of the tools and libraries available.
.NET offers a range of tooling for AI development. Let's take a minute to understand some of the tools and libraries available.
Copilot AI Feb 17, 2025
Corrected 'lets' to 'let's'.


#### Microsoft.Extensions.AI (MEAI)

10 changes: 5 additions & 5 deletions 02-SetupDevEnvironment/getting-started-ollama.md
@@ -1,4 +1,4 @@
# Setting Up the Development Envionment with Ollama
# Setting Up the Development Environment with Ollama

If you want to use Ollama to run local models for this course, follow the steps in this guide.

@@ -32,7 +32,7 @@ You can leave the rest of the settings as they are. Click the **Create codespace

## Verifying your Codespace is running correctly with Ollama

Once your Codespace is fully loaded and configured, lets run a sample app to verify everything is working correctly:
Once your Codespace is fully loaded and configured, let's run a sample app to verify everything is working correctly:

1. Open the terminal. You can open a terminal window by typing **Ctrl+\`** (backtick) on Windows or **Cmd+`** on macOS.

@@ -65,15 +65,15 @@ Once your Codespace is fully loaded and configured, lets run a sample app to ver

One of the cool things about Ollama is that it's easy to change models. The current app uses the "**llama3.2**" model. Let’s switch it up and try the "**phi3.5**" model instead.

1. Download the Phi3.5 model running the comamnd from the terminal:
1. Download the Phi3.5 model running the command from the terminal:

```bash
ollama pull phi3.5
```

You can learn mode about the [Phi3.5](https://ollama.com/library/phi3.5) and other available models in the [Ollama library](https://ollama.com/library/).
You can learn more about the [Phi3.5](https://ollama.com/library/phi3.5) and other available models in the [Ollama library](https://ollama.com/library/).

1. Edit the initialization of the chat client in `Program.cs` to use the new model::
1. Edit the initialization of the chat client in `Program.cs` to use the new model:

```csharp
IChatClient client = new OllamaChatClient(new Uri("http://localhost:11434/"), "phi3.5");
```
4 changes: 2 additions & 2 deletions 02-SetupDevEnvironment/readme.md
@@ -11,7 +11,7 @@ This lesson will guide you through setting up your development environment for t
- ⚡ How to setup a development environment with GitHub Codepaces
- 🤖 Configure your development environment to access LLMs via GitHub Models, Azure OpenAI, or Ollama
- 🛠️ Industry-standard tools configuration with .devcontainer
- 🎯 Finally, everything ready to complete the rest of the course
- 🎯 Finally, everything is ready to complete the rest of the course

Let's dive in and set up your development environment! 🏃‍♂️

@@ -102,7 +102,7 @@ Click the **Create codespace** button to start the Codespace creation process.

## Verifying your Codespace is running correctly with GitHub Models

Once your Codespace is fully loaded and configured, lets run a sample app to verify everything is working correctly:
Once your Codespace is fully loaded and configured, let's run a sample app to verify everything is working correctly:

1. Open the terminal. You can open a terminal window by typing **Ctrl+\`** (backtick) on Windows or **Cmd+`** on macOS.

2 changes: 1 addition & 1 deletion 04-PracticalSamples/readme.md
@@ -32,7 +32,7 @@ For our first two demos, we'll explore the eShopLite project, a simple e-commerc

These demos use [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service) and [Azure Ai Foundry Models](https://ai.azure.com/) to do their inferences (or the generative AI portion) for the applications.

The first demo, we show how to use the Semantic Kernel to enhance the search capabilities, which can understand the context of the user's queries and provide accurate results.
In the first demo, we show how to use the Semantic Kernel to enhance the search capabilities, which can understand the context of the user's queries and provide accurate results.

### eShopLite with semantic search

4 changes: 2 additions & 2 deletions 05-ResponsibleGenAI/readme.md
@@ -16,9 +16,9 @@ When developing generative AI solutions, adhere to the following principles:

For more detailed information diving into each of those principles, check out this [Using Generative AI Responsibly lesson](https://github.com/microsoft/generative-ai-for-beginners/tree/main/03-using-generative-ai-responsibly).

## Why should you prioritize responsible AI
## Why should you prioritize responsible AI?

Comment on lines +19 to 20
Copilot AI Feb 17, 2025
Missing question mark at the end of the heading. It should be '## Why should you prioritize responsible AI?'.
Suggested change:
## Why should you prioritize responsible AI
## Why should you prioritize responsible AI?

Prioritizing responsible AI practices ensure trust, compliance, and better outcomes. Here are key reasons:
Prioritizing responsible AI practices ensures trust, compliance, and better outcomes. Here are key reasons:

- **Hallucinations**: Generative AI systems can produce outputs that are factually incorrect or contextually irrelevant, known as hallucinations. These inaccuracies can undermine user trust and application reliability. Developers should use validation techniques, knowledge-grounding methods, and content constraints to address this challenge.
