
Commit 4f20ce9

Fix build warnings (#45942)

1 parent 9bd2fc4 commit 4f20ce9

File tree

12 files changed: +49 -59 lines changed


docs/ai/tutorials/evaluate-with-reporting.md

Lines changed: 3 additions & 3 deletions
@@ -77,7 +77,7 @@ Complete the following steps to create an MSTest project that connects to the `g

  **Scenario name**

- The [scenario name](xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun.ScenarioName) is set to the fully qualified name of the current test method. However, you can set it to any string of your choice when you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>. Here are some considerations for choosing a scenario name:
+ The [scenario name](xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun.ScenarioName) is set to the fully qualified name of the current test method. However, you can set it to any string of your choice when you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>. Here are some considerations for choosing a scenario name:

  - When using disk-based storage, the scenario name is used as the name of the folder under which the corresponding evaluation results are stored. So it's a good idea to keep the name reasonably short and avoid any characters that aren't allowed in file and directory names.
  - By default, the generated evaluation report splits scenario names on `.` so that the results can be displayed in a hierarchical view with appropriate grouping, nesting, and aggregation. This is especially useful in cases where the scenario name is set to the fully qualified name of the corresponding test method, since it allows the results to be grouped by namespaces and class names in the hierarchy. However, you can also take advantage of this feature by including periods (`.`) in your own custom scenario names to create a reporting hierarchy that works best for your scenarios.
@@ -94,7 +94,7 @@ Complete the following steps to create an MSTest project that connects to the `g

  A <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration> identifies:

- - The set of evaluators that should be invoked for each <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun> that's created by calling <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
+ - The set of evaluators that should be invoked for each <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ScenarioRun> that's created by calling <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
  - The LLM endpoint that the evaluators should use (see <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.ChatConfiguration?displayProperty=nameWithType>).
  - How and where the results for the scenario runs should be stored.
  - How LLM responses related to the scenario runs should be cached.
@@ -171,4 +171,4 @@ Run the test using your preferred test workflow, for example, by using the CLI c

  - Navigate to the directory where the test results are stored (which is `C:\TestReports`, unless you modified the location when you created the <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration>). In the `results` subdirectory, notice that there's a folder for each test run named with a timestamp (`ExecutionName`). Inside each of those folders is a folder for each scenario name&mdash;in this case, just the single test method in the project. That folder contains a JSON file with all the data including the messages, response, and evaluation result.
  - Expand the evaluation. Here are a couple of ideas:
    - Add an additional custom evaluator, such as [an evaluator that uses AI to determine the measurement system](https://github.com/dotnet/ai-samples/blob/main/src/microsoft-extensions-ai-evaluation/api/evaluation/Evaluators/MeasurementSystemEvaluator.cs) that's used in the response.
-   - Add another test method, for example, [a method that evaluates multiple responses](https://github.com/dotnet/ai-samples/blob/main/src/microsoft-extensions-ai-evaluation/api/reporting/ReportingExamples.Example02_SamplingAndEvaluatingMultipleResponses.cs) from the LLM. Since each response can be different, it's good to sample and evaluate at least a few responses to a question. In this case, you specify an iteration name each time you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
+   - Add another test method, for example, [a method that evaluates multiple responses](https://github.com/dotnet/ai-samples/blob/main/src/microsoft-extensions-ai-evaluation/api/reporting/ReportingExamples.Example02_SamplingAndEvaluatingMultipleResponses.cs) from the LLM. Since each response can be different, it's good to sample and evaluate at least a few responses to a question. In this case, you specify an iteration name each time you call <xref:Microsoft.Extensions.AI.Evaluation.Reporting.ReportingConfiguration.CreateScenarioRunAsync(System.String,System.String,System.Collections.Generic.IEnumerable{System.String},System.Collections.Generic.IEnumerable{System.String},System.Threading.CancellationToken)>.
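As a hedged illustration of the API this file documents, a call that sets an explicit scenario name and iteration name might look like the following sketch. The `reportingConfiguration`, `messages`, and `modelResponse` variables are assumed to be created elsewhere in the test class, and the scenario name is illustrative:

```csharp
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Reporting;

// Sketch only: reportingConfiguration, messages, and modelResponse
// are assumed to exist in the surrounding test class.
await using ScenarioRun scenarioRun =
    await reportingConfiguration.CreateScenarioRunAsync(
        scenarioName: "MyTests.Geography.Capitals", // periods create report grouping
        iterationName: "2");                        // distinguishes sampled responses

EvaluationResult result =
    await scenarioRun.EvaluateAsync(messages, modelResponse);
```

Disposing the `ScenarioRun` (via `await using`) is what flushes the results to the configured result store.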

docs/architecture/cloud-native/resilient-communications.md

Lines changed: 0 additions & 2 deletions
@@ -84,8 +84,6 @@ The Azure cloud embraces Istio and provides direct support for it within Azure K

  - [Resilience in Azure whitepaper](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/Resilience%20in%20Azure.pdf)
- - [network latency](https://www.techopedia.com/definition/8553/network-latency)
  - [Redundancy](/azure/architecture/guide/design-principles/redundancy)
  - [geo-replication](/azure/sql-database/sql-database-active-geo-replication)

docs/architecture/microservices/multi-container-microservice-net-applications/implement-api-gateways-with-ocelot.md

Lines changed: 3 additions & 3 deletions
@@ -374,9 +374,9 @@ But the application is configured so it accesses all the microservices through t

  ### The Gateway aggregation pattern in eShopOnContainers

- As introduced previously, a flexible way to implement requests aggregation is with custom services, by code. You could also implement request aggregation with the [Request Aggregation feature in Ocelot](https://ocelot.readthedocs.io/en/latest/features/requestaggregation.html#request-aggregation), but it might not be as flexible as you need. Therefore, the selected way to implement aggregation in eShopOnContainers is with an explicit ASP.NET Core Web API service for each aggregator.
+ As introduced previously, a flexible way to implement requests aggregation is with custom services, by code. The selected way to implement aggregation in eShopOnContainers is with an explicit ASP.NET Core Web API service for each aggregator.

- According to that approach, the API Gateway composition diagram is in reality a bit more extended when considering the aggregator services that are not shown in the simplified global architecture diagram shown previously.
+ According to that approach, the API Gateway composition diagram is in reality a bit more extended when considering the aggregator services that aren't shown in the simplified global architecture diagram shown previously.

  In the following diagram, you can also see how the aggregator services work with their related API Gateways.

@@ -572,7 +572,7 @@ There are other important features to research and use, when using an Ocelot API

  - **Rate limiting** \
    [https://ocelot.readthedocs.io/en/latest/features/ratelimiting.html](https://ocelot.readthedocs.io/en/latest/features/ratelimiting.html)

  - **Swagger for Ocelot** \
    [https://github.com/Burgyn/MMLib.SwaggerForOcelot](https://github.com/Burgyn/MMLib.SwaggerForOcelot)
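To make the "explicit Web API service for each aggregator" approach concrete, here is a minimal sketch of such an aggregation endpoint. The downstream service names, URLs, routes, and record shapes are hypothetical stand-ins, not the actual eShopOnContainers code:

```csharp
using System.Net.Http.Json;

var builder = WebApplication.CreateBuilder(args);

// Hypothetical downstream microservices, resolved by service name.
builder.Services.AddHttpClient("catalog", c => c.BaseAddress = new Uri("http://catalog-api"));
builder.Services.AddHttpClient("basket", c => c.BaseAddress = new Uri("http://basket-api"));

var app = builder.Build();

// One aggregator endpoint composes data from two downstream services.
app.MapGet("/api/shopping/{userId}", async (string userId, IHttpClientFactory factory) =>
{
    var catalog = factory.CreateClient("catalog");
    var basket = factory.CreateClient("basket");

    // Fan out the downstream calls in parallel, then combine the results.
    var itemsTask = catalog.GetFromJsonAsync<List<CatalogItem>>("/api/catalog/items");
    var basketTask = basket.GetFromJsonAsync<BasketData>($"/api/basket/{userId}");
    await Task.WhenAll(itemsTask, basketTask);

    return Results.Ok(new { Items = itemsTask.Result, Basket = basketTask.Result });
});

app.Run();

// Illustrative shapes for the downstream payloads.
record CatalogItem(int Id, string Name, decimal Price);
record BasketData(string BuyerId, List<int> ItemIds);
```

Writing the aggregator by code like this, rather than through gateway configuration, is what gives the flexibility the text describes: the composition logic can branch, transform, and handle partial failures however the scenario requires.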

docs/azure/index.yml

Lines changed: 3 additions & 3 deletions
@@ -262,11 +262,11 @@ additionalContent:

    - title: Webcasts and shows
      links:
        - text: Azure Friday
-         url: https://azure.microsoft.com/resources/videos/azure-friday/
+         url: /shows/azure-friday/
        - text: The Cloud Native Show
-         url: /Shows/The-Cloud-Native-Show
+         url: /shows/The-Cloud-Native-Show
        - text: On .NET
-         url: /Shows/On-NET
+         url: /shows/On-NET
        - text: .NET Community Standup
          url: https://dotnet.microsoft.com/platform/community/standup
        - text: On .NET Live

docs/core/testing/unit-testing-with-copilot.md

Lines changed: 9 additions & 10 deletions
@@ -3,11 +3,12 @@ title: Generate Unit Tests with Copilot

  author: sigmade
  description: How to generate unit tests and test projects in C# using the xUnit framework with the help of Visual Studio commands and GitHub Copilot
  ms.date: 01/12/2025
+ ms.collection: ce-skilling-ai-copilot
  ---

  # Generate unit tests with GitHub Copilot

- In this article, you explore how to generate unit tests and test projects in C# using the xUnit framework with the help of Visual Studio commands and GitHub Copilot.
+ In this article, you explore how to generate unit tests and test projects in C# using the xUnit framework with the help of Visual Studio commands and GitHub Copilot. Using Visual Studio in combination with GitHub Copilot significantly simplifies the process of generating and writing unit tests.

  ## Create a test project

@@ -53,21 +54,21 @@ In the **Create Unit Tests** dialog, select **xUnit** from the **Test Framework*

  :::image type="content" source="media/create-unit-test-window.png" lightbox="media/create-unit-test-window.png" alt-text="Create Unit Tests window":::

- * If you don't have a test project yet, choose "New Test Project" or select an existing one.
- * If necessary, specify a template for the namespace, class, and method name, then click OK.
+ - If you don't have a test project yet, choose **New Test Project** or select an existing one.
+ - If necessary, specify a template for the namespace, class, and method name, then click **OK**.

- After a few seconds, Visual Studio will pull in the necessary packages, and we will get a generated xUnit project with the required packages, structure, a reference to the project being tested, and with the `ProductServiceTests` class and a stub method.
+ After a few seconds, Visual Studio will pull in the necessary packages, and you'll get a generated xUnit project with the required packages and structure, a reference to the project being tested, and the `ProductServiceTests` class and a stub method.

  :::image type="content" source="media/test-mehod-stub.png" lightbox="media/test-mehod-stub.png" alt-text="Generated stub method":::

  ## Generate the tests themselves

  - Select the method being tested again.
- - Right-click - **Ask Copilot**.
+ - Right-click and select **Ask Copilot**.
  - Enter a simple prompt, such as:

-   "generate unit tests using xunit, nsubstitute and insert the result into #ProductServiceTests file."
+   "Generate unit tests using xunit, nsubstitute and insert the result into #ProductServiceTests file."

  You need to select your test class when you type the `#` character.

  > [!TIP]
@@ -77,8 +78,6 @@ After a few seconds, Visual Studio will pull in the necessary packages, and we w

  Execute the prompt, click **Accept**, and Copilot generates the test code. After that, all that remains is to install the necessary packages.

- When the packages are installed, the tests can be run. This example worked on the first try: Copilot knows very well how to work with NSubstitute, and all dependencies were defined through interfaces.
+ When the packages are installed, the tests can be run. This example worked on the first try: Copilot knows how to work with NSubstitute, and all dependencies were defined through interfaces.

  :::image type="content" source="media/test-copilot-result.png" lightbox="media/test-copilot-result.png" alt-text="Generated tests":::

- Thus, using **Visual Studio** in combination with **GitHub Copilot** significantly simplifies the process of generating and writing unit tests.
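The kind of test this workflow produces looks roughly like the following sketch. The `IProductRepository` and `ProductService` shapes are hypothetical stand-ins for the project under test, not code from the article:

```csharp
using NSubstitute;
using Xunit;

public class ProductServiceTests
{
    [Fact]
    public void GetProductName_ReturnsNameFromRepository()
    {
        // Arrange: substitute the dependency, which is defined through an interface.
        var repository = Substitute.For<IProductRepository>();
        repository.GetName(42).Returns("Widget");
        var service = new ProductService(repository);

        // Act
        string name = service.GetProductName(42);

        // Assert
        Assert.Equal("Widget", name);
        repository.Received(1).GetName(42);
    }
}

// Hypothetical shapes for the code under test.
public interface IProductRepository
{
    string GetName(int id);
}

public class ProductService(IProductRepository repository)
{
    public string GetProductName(int id) => repository.GetName(id);
}
```

Because the dependency is an interface, NSubstitute can generate a substitute for it directly, which is exactly why the Copilot-generated tests in the article worked on the first try.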

docs/csharp/whats-new/csharp-version-history.md

Lines changed: 0 additions & 1 deletion
@@ -340,7 +340,6 @@ C# version 5.0, released with Visual Studio 2012, was a focused version of the l

  - [Asynchronous members](../asynchronous-programming/index.md)
  - [Caller info attributes](../language-reference/attributes/caller-information.md)
- - [Code Project: Caller Info Attributes in C# 5.0](https://www.codeproject.com/Tips/606379/Caller-Info-Attributes-in-Csharp)

  The caller info attribute lets you easily retrieve information about the context in which you're running without resorting to a ton of boilerplate reflection code. It has many uses in diagnostics and logging tasks.
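As a brief illustration of the caller info attributes mentioned above (a standard example, not taken from the removed link): a logging helper can declare optional parameters that the compiler fills in with the call site's member name, file path, and line number.

```csharp
using System;
using System.Runtime.CompilerServices;

static class Log
{
    // The compiler substitutes the caller's member name, file path, and line
    // number at each call site; no reflection is needed.
    public static void Write(
        string message,
        [CallerMemberName] string member = "",
        [CallerFilePath] string file = "",
        [CallerLineNumber] int line = 0)
    {
        Console.WriteLine($"{file}:{line} ({member}): {message}");
    }
}
```

A call such as `Log.Write("saving")` made inside a method named `Save` prints the source file, line, and `Save` as the member name, with all three arguments supplied by the compiler at compile time.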
