
Commit 192d7db

ReDeiPirati and Alessio Gozzoli authored
docs: Adding metadata to the tutorials (#8837)
Co-authored-by: Alessio Gozzoli <[email protected]>
1 parent: c5286af

8 files changed: +18 −14 lines

docs/source/tutorials/how_to_compare_two_ai_models_with_label_studio.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -9,6 +9,8 @@ ipynb_repo_path: tutorials/how-to-compare-two-ai-models-with-label-studio/how_to
 repo_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/tree/main/tutorials/how-to-compare-two-ai-models-with-label-studio
 report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/issues/new
 thumbnail: /images/tutorials/tutorials-compare-ai-models.png
+meta_title: How to Compare Two AI Models with Label Studio
+meta_description: Learn how to compare and evaluate two AI models with the Label Studio SDK.
 ---
 
 ## Why this matters
```

docs/source/tutorials/how_to_connect_Hugging_Face_with_Label_Studio_SDK.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -9,6 +9,8 @@ ipynb_repo_path: tutorials/how-to-connect-Hugging-Face-with-Label-Studio-SDK/how
 repo_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/tree/main/tutorials/how-to-connect-Hugging-Face-with-Label-Studio-SDK
 report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/issues/new
 thumbnail: /images/tutorials/tutorials-hugging-face-ls-sdk.png
+meta_title: How to Connect Hugging Face with Label Studio SDK
+meta_description: Learn how to create a NLP workflow by integrating Hugging Face datasets and models with Label Studio for annotation and active learning.
 ---
 
 **A Complete Guide to Connecting Hugging Face and Label Studio**
```

docs/source/tutorials/how_to_create_a_Benchmark_and_Evaluate_your_models_with_Label_Studio.md

Lines changed: 3 additions & 14 deletions

````diff
@@ -9,6 +9,8 @@ ipynb_repo_path: tutorials/how-to-create-benchmark-and-evaluate-your-models/how_
 repo_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/tree/main/tutorials/how-to-create-benchmark-and-evaluate-your-models
 report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/issues/new
 thumbnail: /images/tutorials/tutorials-ai-benchmark-and-eval.png
+meta_title: How to Connect Hugging Face with Label Studio SDK
+meta_description: Learn how to use the Label Studio SDK to create a high-quality benchmark dataset to evaluate multiple AI models
 ---
 
 Evaluating models is only as good as the benchmark you test them against.
 In this tutorial, you'll learn how to use **Label Studio** to create a high-quality benchmark dataset, label it with human expertise, and then evaluate multiple AI models against it — all using the **Label Studio SDK**.
@@ -371,19 +373,6 @@ wait_for_runs_to_complete(versions)
     version 46549 completed
 
 
-
-```python
-project = ls.projects.get(193733)
-prompt = ls.prompts.get(37050)
-versions = ls.prompts.versions.list(prompt_id=prompt.id)
-```
-
-
-    /usr/local/lib/python3.12/dist-packages/pydantic/main.py:463: UserWarning: Pydantic serializer warnings:
-      PydanticSerializationUnexpectedValue(Expected `str` - serialized value may not be as expected [input_value=[], input_type=list])
-      PydanticSerializationUnexpectedValue(Expected `str` - serialized value may not be as expected [input_value=90367, input_type=int])
-    return self.__pydantic_serializer__.to_python(
-
-
 ### Collect Run Costs
 
 Let’s retrieve the **costs for each model run** to include them as additional data points.
@@ -548,4 +537,4 @@ Keep iterating on what you built today:
 - [Blog: Why Benchmarks Matter for Evaluating LLMs](https://labelstud.io/blog/why-benchmarks-matter-for-evaluating-llms/)
 - [Blog: How to Build AI Benchmarks That Evolve with Your Models](https://labelstud.io/blog/how-to-build-ai-benchmarks-that-evolve-with-your-models/)
 - [Blog: Evaluating the GPT-5 Series on Custom Benchmarks](https://labelstud.io/blog/evaluating-the-gpt-5-series-on-custom-benchmarks/)
-- [Blog: How LegalBenchmarks.AI Built a Domain-Specific AI Benchmark](https://labelstud.io/blog/how-legalbenchmarks-ai-built-a-domain-specific-ai-benchmark/)
+- [Blog: How LegalBenchmarks.AI Built a Domain-Specific AI Benchmark](https://labelstud.io/blog/how-legalbenchmarks-ai-built-a-domain-specific-ai-benchmark/)
````

docs/source/tutorials/how_to_debug_agents_with_LangSmith_and_Label_Studio.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -9,6 +9,8 @@ ipynb_repo_path: tutorials/how-to-debug-agents-with-LangSmith-and-Label-Studio/h
 repo_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/tree/main/tutorials/how-to-debug-agents-with-LangSmith-and-Label-Studio
 report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/issues/new
 thumbnail: /images/tutorials/tutorials-debug-agents-langsmith.png
+meta_title: How to Debug Agents with LangSmith and Label Studio
+meta_description: Learn how LangSmith and Label Studio can work together to debug and evaluate AI Agents.
 ---
 
 ## 0. Label Studio Requirements
```
docs/source/tutorials/how_to_embed_evaluation_workflows_in_your_research_stack_with_Label_Studio.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -9,6 +9,8 @@ ipynb_repo_path: tutorials/how-to-embed-evaluation-workflows-in-your-research-st
 repo_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/tree/main/tutorials/how-to-embed-evaluation-workflows-in-your-research-stack-with-label-studio
 report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/issues/new
 thumbnail: /images/tutorials/tutorials-eval-flows-research-stack.png
+meta_title: How to Embed Evaluation Workflows in Your Research Stack with Label Studio
+meta_description: Learn how to build an embedded evaluation workflow directly into your jupyer notebook with Label Studio.
 ---
 
 ## Label Studio Requirements
```

docs/source/tutorials/how_to_measure_inter_annotator_agreement_and_build_human_consensus.md

Lines changed: 3 additions & 0 deletions

```diff
@@ -9,6 +9,9 @@ ipynb_repo_path: tutorials/how-to-measure-inter-annotator-agreement-and-build-hu
 repo_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/tree/main/tutorials/how-to-measure-inter-annotator-agreement-and-build-human-consensus
 report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/issues/new
 thumbnail: /images/tutorials/tutorials-inter-annotator-agreement-and-consensus.png
+meta_title: "How to Measure Inter-Annotator Agreement and Build Human Consensus with Label Studio"
+meta_description: Learn how to measure inter-annotator agreement, build human consensus, establish ground truth and
+compare model predictions using the Label Studio SDK.
 ---
 
 This tutorial walks through a practical workflow to measure inter-annotator agreement, build human consensus, establish ground truth and
```

docs/source/tutorials/how_to_multi_turn_chat_evals_with_chainlit_and_label_studio.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -9,6 +9,8 @@ ipynb_repo_path: tutorials/how-to-multi-turn-chat-evals-with-chainlit-and-label-
 repo_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/tree/main/tutorials/how-to-multi-turn-chat-evals-with-chainlit-and-label-studio
 report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/issues/new
 thumbnail: /images/tutorials/tutorials-eval-multi-turn-chainlit.png
+meta_title: "How to Evaluate Multi-Turn AI Conversations with Chainlit and Label Studio"
+meta_description: Learn how to create a Label Studio project for evaluating chatbot conversations using the Chatbot Evaluation template.
 ---
 
 This notebook demonstrates how to create a Label Studio project for evaluating chatbot conversations using the Chatbot Evaluation template.
```

docs/source/tutorials/index.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -1,5 +1,7 @@
 ---
 title: Tutorials
+meta_title: Tutorials
+meta_description: A curated list of tutorials to help you get started or learn how to integrate Label Studio into your workflow.
 layout: tutorials
 hide_sidebar: true
 ---
```
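Taken together, the changes follow one pattern: each tutorial's YAML front matter gains a `meta_title` and a `meta_description` key. As a rough sketch of how this invariant could be checked going forward (not part of this commit; `missing_meta_keys` is a hypothetical helper), a small Python function can scan a page's front matter for the required keys:

```python
import re

# SEO keys this commit adds to every tutorial's front matter.
REQUIRED_KEYS = ("meta_title", "meta_description")

def missing_meta_keys(markdown_text):
    """Return the required front-matter keys absent from a tutorial page."""
    # Front matter is the block between the opening and closing '---' fences.
    match = re.match(r"\A---\n(.*?)\n---", markdown_text, re.DOTALL)
    if not match:
        return list(REQUIRED_KEYS)  # no front matter at all
    front_matter = match.group(1)
    # A key counts as present if some front-matter line starts with 'key:'.
    return [
        key for key in REQUIRED_KEYS
        if not re.search(rf"^{key}:", front_matter, re.MULTILINE)
    ]
```

Running it over `docs/source/tutorials/*.md` in CI would flag any new tutorial that ships without the metadata keys.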
