docs/ai/tutorials/evaluate-with-reporting.md (3 additions, 3 deletions)
@@ -77,7 +77,7 @@ Complete the following steps to create an MSTest project that connects to the `g
**Scenario name**
- The [scenario name](xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun.ScenarioName) is set to the fully qualified name of the current test method. However, you can set it to any string of your choice when you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>. Here are some considerations for choosing a scenario name:
+ The [scenario name](xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun.ScenarioName) is set to the fully qualified name of the current test method. However, you can set it to any string of your choice when you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>. Here are some considerations for choosing a scenario name:
- When using disk-based storage, the scenario name is used as the name of the folder under which the corresponding evaluation results are stored. So it's a good idea to keep the name reasonably short and avoid any characters that aren't allowed in file and directory names.
- By default, the generated evaluation report splits scenario names on `.` so that the results can be displayed in a hierarchical view with appropriate grouping, nesting, and aggregation. This is especially useful in cases where the scenario name is set to the fully qualified name of the corresponding test method, since it allows the results to be grouped by namespaces and class names in the hierarchy. However, you can also take advantage of this feature by including periods (`.`) in your own custom scenario names to create a reporting hierarchy that works best for your scenarios.
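To make the hierarchy concrete, here's a minimal sketch of opening a scenario run under a period-delimited name. The `reportingConfiguration` field and the `messages`/`response` values are assumed stand-ins for the tutorial's setup; only the `CreateScenarioRunAsync` call reflects the API discussed above.

```csharp
// Sketch only: a period-delimited scenario name lets the report group
// results hierarchically (MyTests > Geography > DistanceQuestion).
string scenarioName = "MyTests.Geography.DistanceQuestion";

await using ScenarioRun scenarioRun =
    await reportingConfiguration.CreateScenarioRunAsync(scenarioName);

// Results recorded through the ScenarioRun are stored under the folder
// (or report hierarchy node) derived from the scenario name.
EvaluationResult result = await scenarioRun.EvaluateAsync(messages, response);
```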
@@ -94,7 +94,7 @@ Complete the following steps to create an MSTest project that connects to the `g
A <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration> identifies:
- - The set of evaluators that should be invoked for each <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun> that's created by calling <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
+ - The set of evaluators that should be invoked for each <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun> that's created by calling <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
- The LLM endpoint that the evaluators should use (see <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.ChatConfiguration?displayProperty=nameWithType>).
- How and where the results for the scenario runs should be stored.
- How LLM responses related to the scenario runs should be cached.
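As a rough illustration of those four responsibilities, a disk-based configuration might be created as follows. This is a sketch assuming the `DiskBasedReportingConfiguration` factory from the same package; the evaluator choice, `chatClient`, and `executionName` values are placeholders rather than the tutorial's exact code.

```csharp
ReportingConfiguration reportingConfiguration = DiskBasedReportingConfiguration.Create(
    storageRootPath: @"C:\TestReports",                    // how and where results are stored
    evaluators: [new CoherenceEvaluator()],                // evaluators run for each ScenarioRun
    chatConfiguration: new ChatConfiguration(chatClient),  // LLM endpoint the evaluators use
    enableResponseCaching: true,                           // cache LLM responses across runs
    executionName: executionName);                         // shared (often timestamp-based) run name
```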
@@ -171,4 +171,4 @@ Run the test using your preferred test workflow, for example, by using the CLI c
- Navigate to the directory where the test results are stored (which is `C:\TestReports`, unless you modified the location when you created the <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration>). In the `results` subdirectory, notice that there's a folder for each test run, named with a timestamp (`ExecutionName`). Inside each of those folders is a folder for each scenario name (in this case, just the single test method in the project). That folder contains a JSON file with all the data, including the messages, the response, and the evaluation result.
- Expand the evaluation. Here are a couple of ideas:
- Add an additional custom evaluator, such as [an evaluator that uses AI to determine the measurement system](https://github.com/dotnet/ai-samples/blob/main/src/microsoft-extensions-ai-evaluation/api/evaluation/Evaluators/MeasurementSystemEvaluator.cs) that's used in the response.
- - Add another test method, for example, [a method that evaluates multiple responses](https://github.com/dotnet/ai-samples/blob/main/src/microsoft-extensions-ai-evaluation/api/reporting/ReportingExamples.Example02_SamplingAndEvaluatingMultipleResponses.cs) from the LLM. Since each response can be different, it's good to sample and evaluate at least a few responses to a question. In this case, you specify an iteration name each time you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
+ - Add another test method, for example, [a method that evaluates multiple responses](https://github.com/dotnet/ai-samples/blob/main/src/microsoft-extensions-ai-evaluation/api/reporting/ReportingExamples.Example02_SamplingAndEvaluatingMultipleResponses.cs) from the LLM. Since each response can be different, it's good to sample and evaluate at least a few responses to a question. In this case, you specify an iteration name each time you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
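A sketch of the sampling idea in that last bullet, assuming the same `reportingConfiguration`, `chatClient`, and `messages` as in the tutorial, and assuming the second `CreateScenarioRunAsync` parameter is the iteration name:

```csharp
// Sample and evaluate several responses to the same question. Each sample
// is recorded as a separate iteration under the same scenario name.
for (int i = 1; i <= 3; i++)
{
    await using ScenarioRun scenarioRun =
        await reportingConfiguration.CreateScenarioRunAsync(
            scenarioName,
            iterationName: i.ToString());

    ChatResponse response = await chatClient.GetResponseAsync(messages);
    await scenarioRun.EvaluateAsync(messages, response);
}
```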
docs/architecture/cloud-native/resilient-communications.md (0 additions, 2 deletions)
@@ -84,8 +84,6 @@ The Azure cloud embraces Istio and provides direct support for it within Azure K
- [Resilience in Azure whitepaper](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/Resilience%20in%20Azure.pdf)
docs/architecture/microservices/multi-container-microservice-net-applications/implement-api-gateways-with-ocelot.md (3 additions, 3 deletions)
@@ -374,9 +374,9 @@ But the application is configured so it accesses all the microservices through t
### The Gateway aggregation pattern in eShopOnContainers
- As introduced previously, a flexible way to implement requests aggregation is with custom services, by code. You could also implement request aggregation with the [Request Aggregation feature in Ocelot](https://ocelot.readthedocs.io/en/latest/features/requestaggregation.html#request-aggregation), but it might not be as flexible as you need. Therefore, the selected way to implement aggregation in eShopOnContainers is with an explicit ASP.NET Core Web API service for each aggregator.
+ As introduced previously, a flexible way to implement requests aggregation is with custom services, by code. The selected way to implement aggregation in eShopOnContainers is with an explicit ASP.NET Core Web API service for each aggregator.
- According to that approach, the API Gateway composition diagram is in reality a bit more extended when considering the aggregator services that are not shown in the simplified global architecture diagram shown previously.
+ According to that approach, the API Gateway composition diagram is in reality a bit more extended when considering the aggregator services that aren't shown in the simplified global architecture diagram shown previously.
In the following diagram, you can also see how the aggregator services work with their related API Gateways.
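To sketch what such an explicit aggregator can look like (the controller, route, and downstream client names below are invented for illustration and aren't eShopOnContainers code), it's a small Web API that fans out to downstream microservices and composes one response:

```csharp
using Microsoft.AspNetCore.Mvc;
using System.Net.Http;
using System.Threading.Tasks;

[ApiController]
[Route("api/v1/[controller]")]
public class OrderAggregatorController : ControllerBase
{
    private readonly IHttpClientFactory _httpClientFactory;

    public OrderAggregatorController(IHttpClientFactory httpClientFactory) =>
        _httpClientFactory = httpClientFactory;

    [HttpGet("{orderId}")]
    public async Task<IActionResult> GetOrderDetails(string orderId)
    {
        // Fan out to two downstream microservices (named clients registered
        // with AddHttpClient), then compose a single payload for the caller.
        HttpClient ordering = _httpClientFactory.CreateClient("ordering");
        HttpClient catalog = _httpClientFactory.CreateClient("catalog");

        string order = await ordering.GetStringAsync($"/api/v1/orders/{orderId}");
        string items = await catalog.GetStringAsync("/api/v1/catalog/items");

        return Ok(new { order, items });
    }
}
```

Because the aggregator is ordinary application code, it can handle partial failures, retries, and response shaping in ways a generic gateway aggregation feature can't.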
@@ -572,7 +572,7 @@ There are other important features to research and use, when using an Ocelot API
docs/core/testing/unit-testing-with-copilot.md (9 additions, 10 deletions)
@@ -3,11 +3,12 @@ title: Generate Unit Tests with Copilot
author: sigmade
description: How to generate unit tests and test projects in C# using the xUnit framework with the help of Visual Studio commands and GitHub Copilot
ms.date: 01/12/2025
+ ms.collection: ce-skilling-ai-copilot
---
# Generate unit tests with GitHub Copilot
- In this article, you explore how to generate unit tests and test projects in C# using the xUnit framework with the help of Visual Studio commands and GitHub Copilot.
+ In this article, you explore how to generate unit tests and test projects in C# using the xUnit framework with the help of Visual Studio commands and GitHub Copilot. Using Visual Studio in combination with GitHub Copilot significantly simplifies the process of generating and writing unit tests.
## Create a test project
@@ -53,21 +54,21 @@ In the **Create Unit Tests** dialog, select **xUnit** from the **Test Framework*
:::image type="content" source="media/create-unit-test-window.png" lightbox="media/create-unit-test-window.png" alt-text="Create Unit Tests window":::
- * If you don't have a test project yet, choose "New Test Project" or select an existing one.
- * If necessary, specify a template for the namespace, class, and method name, then click OK.
+ - If you don't have a test project yet, choose **New Test Project** or select an existing one.
+ - If necessary, specify a template for the namespace, class, and method name, then click **OK**.
- After a few seconds, Visual Studio will pull in the necessary packages, and we will get a generated xUnit project with the required packages, structure, a reference to the project being tested, and with the `ProductServiceTests` class and a stub method.
+ After a few seconds, Visual Studio will pull in the necessary packages, and you'll get a generated xUnit project with the required packages and structure, a reference to the project being tested, and the `ProductServiceTests` class and a stub method.
"generate unit tests using xunit, nsubstitute and insert the result into #ProductServiceTests file."
70
-
70
+
"Generate unit tests using xunit, nsubstitute and insert the result into #ProductServiceTests file."
71
+
71
72
You need to select your test class when you type the `#` character.
> [!TIP]
@@ -77,8 +78,6 @@ After a few seconds, Visual Studio will pull in the necessary packages, and we w
Execute the prompt, click **Accept**, and Copilot generates the test code. After that, all that remains is to install the necessary packages.
- When the packages are installed, the tests can be run. This example worked on the first try: Copilot knows very well how to work with NSubstitute, and all dependencies were defined through interfaces.
+ When the packages are installed, the tests can be run. This example worked on the first try: Copilot knows how to work with NSubstitute, and all dependencies were defined through interfaces.
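For reference, a generated test for this setup might look roughly like the following; `IProductRepository`, `Product`, and `ProductService` are hypothetical stand-ins for the code under test, not types from the article:

```csharp
using System.Threading.Tasks;
using NSubstitute;
using Xunit;

public class ProductServiceTests
{
    [Fact]
    public async Task GetProductAsync_ReturnsProduct_WhenItExists()
    {
        // Arrange: substitute the interface dependency and stub its behavior.
        IProductRepository repository = Substitute.For<IProductRepository>();
        repository.GetByIdAsync(42).Returns(new Product(42, "Keyboard"));

        var service = new ProductService(repository);

        // Act
        Product product = await service.GetProductAsync(42);

        // Assert
        Assert.Equal("Keyboard", product.Name);
    }
}
```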
- [Caller info attributes](../language-reference/attributes/caller-information.md)
-
- [Code Project: Caller Info Attributes in C# 5.0](https://www.codeproject.com/Tips/606379/Caller-Info-Attributes-in-Csharp)
The caller info attribute lets you easily retrieve information about the context in which you're running without resorting to a ton of boilerplate reflection code. It has many uses in diagnostics and logging tasks.
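For example, a minimal logging helper built on the caller info attributes (a sketch, not code from the diff above):

```csharp
using System;
using System.Runtime.CompilerServices;

static class Log
{
    // The compiler supplies these arguments at each call site,
    // so no reflection or stack walking is needed.
    public static void Write(
        string message,
        [CallerMemberName] string member = "",
        [CallerFilePath] string file = "",
        [CallerLineNumber] int line = 0)
    {
        Console.WriteLine($"{file}({line}) {member}: {message}");
    }
}

// Calling Log.Write("Cache miss") from a method named GetUser prints
// something like: C:\src\App\UserService.cs(27) GetUser: Cache miss
```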