diff --git a/docs/source/ar/preprocessing.md b/docs/source/ar/preprocessing.md
index 1418c69fd7a3..b9113064804b 100644
--- a/docs/source/ar/preprocessing.md
+++ b/docs/source/ar/preprocessing.md
@@ -302,7 +302,7 @@ pip install datasets
-قم بتحميل مجموعة بيانات [food101](https://huggingface.co/datasets/food101) (راجع دليل 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) لمزيد من التفاصيل حول كيفية تحميل مجموعة بيانات) لمعرفة كيف يمكنك استخدام معالج الصور مع مجموعات بيانات رؤية الحاسب:
+قم بتحميل مجموعة بيانات [food101](https://huggingface.co/datasets/ethz/food101) (راجع دليل 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) لمزيد من التفاصيل حول كيفية تحميل مجموعة بيانات) لمعرفة كيف يمكنك استخدام معالج الصور مع مجموعات بيانات رؤية الحاسب:
@@ -313,7 +313,7 @@ pip install datasets
 ```py
 >>> from datasets import load_dataset
->>> dataset = load_dataset("food101", split="train[:100]")
+>>> dataset = load_dataset("ethz/food101", split="train[:100]")
 ```
 بعد ذلك، الق نظرة على الصورة مع ميزة 🤗 Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image):
diff --git a/docs/source/de/preprocessing.md b/docs/source/de/preprocessing.md
index baae623d6988..d686acd0cc6c 100644
--- a/docs/source/de/preprocessing.md
+++ b/docs/source/de/preprocessing.md
@@ -308,12 +308,12 @@ Die Länge der ersten beiden Beispiele entspricht nun der von Ihnen angegebenen
 Ein Merkmalsextraktor wird auch verwendet, um Bilder für Bildverarbeitungsaufgaben zu verarbeiten. Auch hier besteht das Ziel darin, das Rohbild in eine Reihe von Tensoren als Eingabe zu konvertieren.
-Laden wir den [food101](https://huggingface.co/datasets/food101) Datensatz für dieses Tutorial. Verwenden Sie den Parameter 🤗 Datasets `split`, um nur eine kleine Stichprobe aus dem Trainingssplit zu laden, da der Datensatz recht groß ist:
+Laden wir den [food101](https://huggingface.co/datasets/ethz/food101) Datensatz für dieses Tutorial. Verwenden Sie den Parameter 🤗 Datasets `split`, um nur eine kleine Stichprobe aus dem Trainingssplit zu laden, da der Datensatz recht groß ist:
 ```py
 >>> from datasets import load_dataset
->>> dataset = load_dataset("food101", split="train[:100]")
+>>> dataset = load_dataset("ethz/food101", split="train[:100]")
 ```
 Als Nächstes sehen Sie sich das Bild mit dem Merkmal 🤗 Datensätze [Bild](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) an:
diff --git a/docs/source/en/image_processors.md b/docs/source/en/image_processors.md
index feb568bdd3ba..b043ab541dc5 100644
--- a/docs/source/en/image_processors.md
+++ b/docs/source/en/image_processors.md
@@ -145,7 +145,7 @@ Start by loading a small sample of the [food101](https://hf.co/datasets/food101)
 ```py
 from datasets import load_dataset
-dataset = load_dataset("food101", split="train[:100]")
+dataset = load_dataset("ethz/food101", split="train[:100]")
 ```
 From the [transforms](https://pytorch.org/vision/stable/transforms.html) module, use the [Compose](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) API to chain together [RandomResizedCrop](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) and [ColorJitter](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html). These transforms randomly crop and resize an image, and randomly adjusts an images colors.
diff --git a/docs/source/en/tasks/image_classification.md b/docs/source/en/tasks/image_classification.md
index 0af4be8ed6b9..e4cae438c299 100644
--- a/docs/source/en/tasks/image_classification.md
+++ b/docs/source/en/tasks/image_classification.md
@@ -26,7 +26,7 @@ after a natural disaster, monitoring crop health, or helping screen medical imag
 This guide illustrates how to:
-1. Fine-tune [ViT](../model_doc/vit) on the [Food-101](https://huggingface.co/datasets/food101) dataset to classify a food item in an image.
+1. Fine-tune [ViT](../model_doc/vit) on the [Food-101](https://huggingface.co/datasets/ethz/food101) dataset to classify a food item in an image.
 2. Use your fine-tuned model for inference.
@@ -57,7 +57,7 @@ experiment and make sure everything works before spending more time training on
 ```py
 >>> from datasets import load_dataset
->>> food = load_dataset("food101", split="train[:5000]")
+>>> food = load_dataset("ethz/food101", split="train[:5000]")
 ```
 Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
@@ -250,7 +250,7 @@ Great, now that you've fine-tuned a model, you can use it for inference!
 Load an image you'd like to run inference on:
 ```py
->>> ds = load_dataset("food101", split="validation[:10]")
+>>> ds = load_dataset("ethz/food101", split="validation[:10]")
 >>> image = ds["image"][0]
 ```
diff --git a/docs/source/es/preprocessing.md b/docs/source/es/preprocessing.md
index 8486d6a0687a..3d9d9e653fa5 100644
--- a/docs/source/es/preprocessing.md
+++ b/docs/source/es/preprocessing.md
@@ -321,12 +321,12 @@ Las longitudes de las dos primeras muestras coinciden ahora con la longitud máx
 También se utiliza un extractor de características para procesar imágenes para tareas de visión por computadora. Una vez más, el objetivo es convertir la imagen en bruto en un batch de tensores como entrada.
-Vamos a cargar el dataset [food101](https://huggingface.co/datasets/food101) para este tutorial. Usa el parámetro 🤗 Datasets `split` para cargar solo una pequeña muestra de la división de entrenamiento ya que el dataset es bastante grande:
+Vamos a cargar el dataset [food101](https://huggingface.co/datasets/ethz/food101) para este tutorial. Usa el parámetro 🤗 Datasets `split` para cargar solo una pequeña muestra de la división de entrenamiento ya que el dataset es bastante grande:
 ```py
 >>> from datasets import load_dataset
->>> dataset = load_dataset("food101", split="train[:100]")
+>>> dataset = load_dataset("ethz/food101", split="train[:100]")
 ```
 A continuación, observa la imagen con la función 🤗 Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image):
diff --git a/docs/source/es/tasks/image_classification.md b/docs/source/es/tasks/image_classification.md
index 1bea46884202..dc0ecabde411 100644
--- a/docs/source/es/tasks/image_classification.md
+++ b/docs/source/es/tasks/image_classification.md
@@ -20,7 +20,7 @@ rendered properly in your Markdown viewer.
 La clasificación de imágenes asigna una etiqueta o clase a una imagen. A diferencia de la clasificación de texto o audio, las entradas son los valores de los píxeles que representan una imagen. La clasificación de imágenes tiene muchos usos, como la detección de daños tras una catástrofe, el control de la salud de los cultivos o la búsqueda de signos de enfermedad en imágenes médicas.
-Esta guía te mostrará como hacer fine-tune al [ViT](https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/vit) en el dataset [Food-101](https://huggingface.co/datasets/food101) para clasificar un alimento en una imagen.
+Esta guía te mostrará como hacer fine-tune al [ViT](https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/vit) en el dataset [Food-101](https://huggingface.co/datasets/ethz/food101) para clasificar un alimento en una imagen.
@@ -35,7 +35,7 @@ Carga solo las primeras 5000 imágenes del dataset Food-101 de la biblioteca
 ```py
 >>> from datasets import load_dataset
->>> food = load_dataset("food101", split="train[:5000]")
+>>> food = load_dataset("ethz/food101", split="train[:5000]")
 ```
 Divide el dataset en un train y un test set:
diff --git a/docs/source/it/preprocessing.md b/docs/source/it/preprocessing.md
index 6d7bc5b2e3df..d7d35ee0b154 100644
--- a/docs/source/it/preprocessing.md
+++ b/docs/source/it/preprocessing.md
@@ -321,12 +321,12 @@ La lunghezza dei campioni adesso coincide con la massima lunghezza impostata nel
 Un estrattore di caratteristiche si può usare anche per processare immagini e per compiti di visione. Ancora una volta, l'obiettivo è convertire l'immagine grezza in un lotto di tensori come input.
-Carica il dataset [food101](https://huggingface.co/datasets/food101) per questa esercitazione. Usa il parametro `split` di 🤗 Datasets per caricare solo un piccolo campione dal dataset di addestramento poichè il set di dati è molto grande:
+Carica il dataset [food101](https://huggingface.co/datasets/ethz/food101) per questa esercitazione. Usa il parametro `split` di 🤗 Datasets per caricare solo un piccolo campione dal dataset di addestramento poichè il set di dati è molto grande:
 ```py
 >>> from datasets import load_dataset
->>> dataset = load_dataset("food101", split="train[:100]")
+>>> dataset = load_dataset("ethz/food101", split="train[:100]")
 ```
 Secondo passo, dai uno sguardo alle immagini usando la caratteristica [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=image#datasets.Image) di 🤗 Datasets:
diff --git a/docs/source/ja/preprocessing.md b/docs/source/ja/preprocessing.md
index cb1129a8355e..99ce9aff7534 100644
--- a/docs/source/ja/preprocessing.md
+++ b/docs/source/ja/preprocessing.md
@@ -321,7 +321,7 @@ pip install datasets
-コンピュータビジョンのデータセットで画像プロセッサを使用する方法を示すために、[food101](https://huggingface.co/datasets/food101)データセットをロードします(データセットのロード方法の詳細については🤗[Datasetsチュートリアル](https://huggingface.co/docs/datasets/load_hub)を参照):
+コンピュータビジョンのデータセットで画像プロセッサを使用する方法を示すために、[food101](https://huggingface.co/datasets/ethz/food101)データセットをロードします(データセットのロード方法の詳細については🤗[Datasetsチュートリアル](https://huggingface.co/docs/datasets/load_hub)を参照):
@@ -332,7 +332,7 @@ pip install datasets
 ```python
 >>> from datasets import load_dataset
->>> dataset = load_dataset("food101", split="train[:100]")
+>>> dataset = load_dataset("ethz/food101", split="train[:100]")
 ```
 次に、🤗 Datasetsの [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) 機能で画像を見てみましょう:
diff --git a/docs/source/ja/tasks/image_classification.md b/docs/source/ja/tasks/image_classification.md
index 164176a911d5..9d8d391b5cc9 100644
--- a/docs/source/ja/tasks/image_classification.md
+++ b/docs/source/ja/tasks/image_classification.md
@@ -27,7 +27,7 @@ rendered properly in your Markdown viewer.
 このガイドでは、次の方法を説明します。
-1. [Food-101](https://huggingface.co/datasets/food101) データセットの [ViT](model_doc/vit) を微調整して、画像内の食品を分類します。
+1. [Food-101](https://huggingface.co/datasets/ethz/food101) データセットの [ViT](model_doc/vit) を微調整して、画像内の食品を分類します。
 2. 微調整したモデルを推論に使用します。
@@ -58,7 +58,7 @@ Datasets、🤗 データセット ライブラリから Food-101 データセ
 ```py
 >>> from datasets import load_dataset
->>> food = load_dataset("food101", split="train[:5000]")
+>>> food = load_dataset("ethz/food101", split="train[:5000]")
 ```
 [`~datasets.Dataset.train_test_split`] メソッドを使用して、データセットの `train` 分割をトレイン セットとテスト セットに分割します。
@@ -255,7 +255,7 @@ Datasets、🤗 データセット ライブラリから Food-101 データセ
 推論を実行したい画像を読み込みます。
 ```py
->>> ds = load_dataset("food101", split="validation[:10]")
+>>> ds = load_dataset("ethz/food101", split="validation[:10]")
 >>> image = ds["image"][0]
 ```
diff --git a/docs/source/ko/image_processors.md b/docs/source/ko/image_processors.md
index eddccb799ecf..39a3f6869f52 100644
--- a/docs/source/ko/image_processors.md
+++ b/docs/source/ko/image_processors.md
@@ -146,7 +146,7 @@ Transformers의 비전 모델은 입력값으로 PyTorch 텐서 형태의 픽셀
 ```py
 from datasets import load_dataset
-dataset = load_dataset("food101", split="train[:100]")
+dataset = load_dataset("ethz/food101", split="train[:100]")
 ```
 [transforms](https://pytorch.org/vision/stable/transforms.html) 모듈의 [Compose](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html)API는 여러 변환을 하나로 묶어주는 역할을 합니다. 여기서는 이미지를 무작위로 자르고 리사이즈하는 [RandomResizedCrop](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html)과 색상을 무작위로 바꾸는 [ColorJitter](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html)를 함께 사용해보겠습니다.
diff --git a/docs/source/ko/tasks/image_classification.md b/docs/source/ko/tasks/image_classification.md
index 3e1e829ae8d5..88051cae80a5 100644
--- a/docs/source/ko/tasks/image_classification.md
+++ b/docs/source/ko/tasks/image_classification.md
@@ -26,7 +26,7 @@ rendered properly in your Markdown viewer.
 이 가이드에서는 다음을 설명합니다:
-1. [Food-101](https://huggingface.co/datasets/food101) 데이터 세트에서 [ViT](model_doc/vit)를 미세 조정하여 이미지에서 식품 항목을 분류합니다.
+1. [Food-101](https://huggingface.co/datasets/ethz/food101) 데이터 세트에서 [ViT](model_doc/vit)를 미세 조정하여 이미지에서 식품 항목을 분류합니다.
 2. 추론을 위해 미세 조정 모델을 사용합니다.
@@ -57,7 +57,7 @@ Hugging Face 계정에 로그인하여 모델을 업로드하고 커뮤니티에
 ```py
 >>> from datasets import load_dataset
->>> food = load_dataset("food101", split="train[:5000]")
+>>> food = load_dataset("ethz/food101", split="train[:5000]")
 ```
 데이터 세트의 `train`을 [`~datasets.Dataset.train_test_split`] 메소드를 사용하여 훈련 및 테스트 세트로 분할하세요:
@@ -252,7 +252,7 @@ Hugging Face 계정에 로그인하여 모델을 업로드하고 커뮤니티에
 추론을 수행하고자 하는 이미지를 가져와봅시다:
 ```py
->>> ds = load_dataset("food101", split="validation[:10]")
+>>> ds = load_dataset("ethz/food101", split="validation[:10]")
 >>> image = ds["image"][0]
 ```
diff --git a/docs/source/zh/preprocessing.md b/docs/source/zh/preprocessing.md
index 252f41f214ea..aad163ec40bb 100644
--- a/docs/source/zh/preprocessing.md
+++ b/docs/source/zh/preprocessing.md
@@ -323,7 +323,7 @@ pip install datasets
-加载[food101](https://huggingface.co/datasets/food101)数据集(有关如何加载数据集的更多详细信息,请参阅🤗 [Datasets教程](https://huggingface.co/docs/datasets/load_hub))以了解如何在计算机视觉数据集中使用图像处理器:
+加载[food101](https://huggingface.co/datasets/ethz/food101)数据集(有关如何加载数据集的更多详细信息,请参阅🤗 [Datasets教程](https://huggingface.co/docs/datasets/load_hub))以了解如何在计算机视觉数据集中使用图像处理器:
@@ -335,7 +335,7 @@ pip install datasets
 ```py
 >>> from datasets import load_dataset
->>> dataset = load_dataset("food101", split="train[:100]")
+>>> dataset = load_dataset("ethz/food101", split="train[:100]")
 ```
 接下来,使用🤗 Datasets的[`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image)功能查看图像:
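The change above is purely mechanical: every bare `food101` dataset ID becomes the namespaced `ethz/food101`, both in hub URLs and in `load_dataset(...)` calls. A sketch of a helper that performs the same rewrite on a doc string (hypothetical script, not part of this PR; it assumes the old ID only appears in those two forms and is safe to run twice):

```python
import re


def migrate_dataset_id(text: str) -> str:
    """Rewrite bare food101 dataset references to the namespaced ethz/food101 ID.

    Hypothetical helper mirroring this PR's rename; covers hub dataset URLs
    (huggingface.co/datasets/food101, hf.co/datasets/food101) and
    load_dataset("food101", ...) calls. Already-namespaced references are
    left untouched, so the rewrite is idempotent.
    """
    # Hub links: the pattern requires "datasets/" immediately before "food101",
    # so "datasets/ethz/food101" no longer matches on a second pass.
    text = re.sub(
        r"((?:huggingface|hf)\.co/datasets/)food101\b",
        r"\1ethz/food101",
        text,
    )
    # load_dataset calls: only a literal "food101" first argument is rewritten.
    text = re.sub(r'load_dataset\(\s*"food101"', 'load_dataset("ethz/food101"', text)
    return text
```

Running such a helper over `docs/source/**/*.md` and diffing the result against this patch would be one way to confirm no occurrence was missed (for example, prose links updated but code blocks forgotten, or vice versa).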