4 changes: 2 additions & 2 deletions docs/source/ar/preprocessing.md
@@ -302,7 +302,7 @@ pip install datasets

</Tip>

Load the [food101](https://huggingface.co/datasets/food101) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use the image processor with computer vision datasets:
Load the [food101](https://huggingface.co/datasets/ethz/food101) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use the image processor with computer vision datasets:

<Tip>

@@ -313,7 +313,7 @@ pip install datasets
```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("food101", split="train[:100]")
>>> dataset = load_dataset("ethz/food101", split="train[:100]")
```

Next, take a look at the image with the 🤗 Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) feature:
4 changes: 2 additions & 2 deletions docs/source/de/preprocessing.md
@@ -308,12 +308,12 @@ The length of the first two examples now matches the maximum length you specified

A feature extractor is also used to process images for computer vision tasks. Here too, the goal is to convert the raw image into a series of tensors as input.

Let's load the [food101](https://huggingface.co/datasets/food101) dataset for this tutorial. Use the 🤗 Datasets `split` parameter to load only a small sample from the training split, since the dataset is quite large:
Let's load the [food101](https://huggingface.co/datasets/ethz/food101) dataset for this tutorial. Use the 🤗 Datasets `split` parameter to load only a small sample from the training split, since the dataset is quite large:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("food101", split="train[:100]")
>>> dataset = load_dataset("ethz/food101", split="train[:100]")
```

Next, take a look at the image with the 🤗 Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) feature:
2 changes: 1 addition & 1 deletion docs/source/en/image_processors.md
@@ -145,7 +145,7 @@ Start by loading a small sample of the [food101](https://hf.co/datasets/food101)
```py
from datasets import load_dataset

dataset = load_dataset("food101", split="train[:100]")
dataset = load_dataset("ethz/food101", split="train[:100]")
```

From the [transforms](https://pytorch.org/vision/stable/transforms.html) module, use the [Compose](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) API to chain together [RandomResizedCrop](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) and [ColorJitter](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html). These transforms randomly crop and resize an image, and randomly adjust an image's colors.
6 changes: 3 additions & 3 deletions docs/source/en/tasks/image_classification.md
@@ -26,7 +26,7 @@ after a natural disaster, monitoring crop health, or helping screen medical imag

This guide illustrates how to:

1. Fine-tune [ViT](../model_doc/vit) on the [Food-101](https://huggingface.co/datasets/food101) dataset to classify a food item in an image.
1. Fine-tune [ViT](../model_doc/vit) on the [Food-101](https://huggingface.co/datasets/ethz/food101) dataset to classify a food item in an image.
2. Use your fine-tuned model for inference.

<Tip>
@@ -57,7 +57,7 @@ experiment and make sure everything works before spending more time training on
```py
>>> from datasets import load_dataset

>>> food = load_dataset("food101", split="train[:5000]")
>>> food = load_dataset("ethz/food101", split="train[:5000]")
```

Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
@@ -250,7 +250,7 @@ Great, now that you've fine-tuned a model, you can use it for inference!
Load an image you'd like to run inference on:

```py
>>> ds = load_dataset("food101", split="validation[:10]")
>>> ds = load_dataset("ethz/food101", split="validation[:10]")
>>> image = ds["image"][0]
```

4 changes: 2 additions & 2 deletions docs/source/es/preprocessing.md
@@ -321,12 +321,12 @@ The lengths of the first two samples now match the maximum length

A feature extractor is also used to process images for computer vision tasks. Again, the goal is to convert the raw image into a batch of tensors as input.

Let's load the [food101](https://huggingface.co/datasets/food101) dataset for this tutorial. Use the 🤗 Datasets `split` parameter to load only a small sample of the training split, since the dataset is quite large:
Let's load the [food101](https://huggingface.co/datasets/ethz/food101) dataset for this tutorial. Use the 🤗 Datasets `split` parameter to load only a small sample of the training split, since the dataset is quite large:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("food101", split="train[:100]")
>>> dataset = load_dataset("ethz/food101", split="train[:100]")
```

Next, look at the image with the 🤗 Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) feature:
4 changes: 2 additions & 2 deletions docs/source/es/tasks/image_classification.md
@@ -20,7 +20,7 @@ rendered properly in your Markdown viewer.

Image classification assigns a label or class to an image. Unlike text or audio classification, the inputs are the pixel values that represent an image. Image classification has many uses, such as detecting damage after a disaster, monitoring crop health, or looking for signs of disease in medical images.

This guide will show you how to fine-tune [ViT](https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/vit) on the [Food-101](https://huggingface.co/datasets/food101) dataset to classify a food item in an image.
This guide will show you how to fine-tune [ViT](https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/vit) on the [Food-101](https://huggingface.co/datasets/ethz/food101) dataset to classify a food item in an image.

<Tip>

@@ -35,7 +35,7 @@ Load only the first 5000 images of the Food-101 dataset from the library
```py
>>> from datasets import load_dataset

>>> food = load_dataset("food101", split="train[:5000]")
>>> food = load_dataset("ethz/food101", split="train[:5000]")
```

Split the dataset into a train and a test set:
4 changes: 2 additions & 2 deletions docs/source/it/preprocessing.md
@@ -321,12 +321,12 @@ The length of the samples now matches the maximum length set in

A feature extractor can also be used to process images for vision tasks. Once again, the goal is to convert the raw image into a batch of tensors as input.

Load the [food101](https://huggingface.co/datasets/food101) dataset for this tutorial. Use the 🤗 Datasets `split` parameter to load only a small sample from the training split, since the dataset is very large:
Load the [food101](https://huggingface.co/datasets/ethz/food101) dataset for this tutorial. Use the 🤗 Datasets `split` parameter to load only a small sample from the training split, since the dataset is very large:

```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("food101", split="train[:100]")
>>> dataset = load_dataset("ethz/food101", split="train[:100]")
```

Next, take a look at the images using the 🤗 Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=image#datasets.Image) feature:
4 changes: 2 additions & 2 deletions docs/source/ja/preprocessing.md
@@ -321,7 +321,7 @@ pip install datasets

</Tip>

To demonstrate how to use an image processor with computer vision datasets, load the [food101](https://huggingface.co/datasets/food101) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for details on how to load a dataset):
To demonstrate how to use an image processor with computer vision datasets, load the [food101](https://huggingface.co/datasets/ethz/food101) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for details on how to load a dataset):

<Tip>

@@ -332,7 +332,7 @@ pip install datasets
```python
>>> from datasets import load_dataset

>>> dataset = load_dataset("food101", split="train[:100]")
>>> dataset = load_dataset("ethz/food101", split="train[:100]")
```

Next, take a look at the image with the 🤗 Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) feature:
6 changes: 3 additions & 3 deletions docs/source/ja/tasks/image_classification.md
@@ -27,7 +27,7 @@ rendered properly in your Markdown viewer.

This guide shows you how to:

1. Fine-tune [ViT](model_doc/vit) on the [Food-101](https://huggingface.co/datasets/food101) dataset to classify a food item in an image.
1. Fine-tune [ViT](model_doc/vit) on the [Food-101](https://huggingface.co/datasets/ethz/food101) dataset to classify a food item in an image.
2. Use your fine-tuned model for inference.

<Tip>
@@ -58,7 +58,7 @@ Load the Food-101 dataset from the 🤗 Datasets library
```py
>>> from datasets import load_dataset

>>> food = load_dataset("food101", split="train[:5000]")
>>> food = load_dataset("ethz/food101", split="train[:5000]")
```

Use the [`~datasets.Dataset.train_test_split`] method to split the dataset's `train` split into a train set and a test set.
@@ -255,7 +255,7 @@ Load the Food-101 dataset from the 🤗 Datasets library
Load an image you'd like to run inference on:

```py
>>> ds = load_dataset("food101", split="validation[:10]")
>>> ds = load_dataset("ethz/food101", split="validation[:10]")
>>> image = ds["image"][0]
```

2 changes: 1 addition & 1 deletion docs/source/ko/image_processors.md
@@ -146,7 +146,7 @@ Vision models in Transformers take pixel values in the form of PyTorch tensors as input
```py
from datasets import load_dataset

dataset = load_dataset("food101", split="train[:100]")
dataset = load_dataset("ethz/food101", split="train[:100]")
```

The [Compose](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) API from the [transforms](https://pytorch.org/vision/stable/transforms.html) module chains several transforms together. Here, we'll combine [RandomResizedCrop](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html), which randomly crops and resizes an image, with [ColorJitter](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html), which randomly changes its colors.
6 changes: 3 additions & 3 deletions docs/source/ko/tasks/image_classification.md
@@ -26,7 +26,7 @@ rendered properly in your Markdown viewer.

This guide explains how to:

1. Fine-tune [ViT](model_doc/vit) on the [Food-101](https://huggingface.co/datasets/food101) dataset to classify food items in images.
1. Fine-tune [ViT](model_doc/vit) on the [Food-101](https://huggingface.co/datasets/ethz/food101) dataset to classify food items in images.
2. Use your fine-tuned model for inference.

<Tip>
@@ -57,7 +57,7 @@ Log in to your Hugging Face account to upload the model and share it with the community
```py
>>> from datasets import load_dataset

>>> food = load_dataset("food101", split="train[:5000]")
>>> food = load_dataset("ethz/food101", split="train[:5000]")
```

Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:
@@ -252,7 +252,7 @@ Log in to your Hugging Face account to upload the model and share it with the community
Let's grab an image to run inference on:

```py
>>> ds = load_dataset("food101", split="validation[:10]")
>>> ds = load_dataset("ethz/food101", split="validation[:10]")
>>> image = ds["image"][0]
```

4 changes: 2 additions & 2 deletions docs/source/zh/preprocessing.md
@@ -323,7 +323,7 @@ pip install datasets

</Tip>

Load the [food101](https://huggingface.co/datasets/food101) dataset (for more details on how to load a dataset, see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub)) to learn how to use an image processor with computer vision datasets:
Load the [food101](https://huggingface.co/datasets/ethz/food101) dataset (for more details on how to load a dataset, see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub)) to learn how to use an image processor with computer vision datasets:

<Tip>

@@ -335,7 +335,7 @@ pip install datasets
```py
>>> from datasets import load_dataset

>>> dataset = load_dataset("food101", split="train[:100]")
>>> dataset = load_dataset("ethz/food101", split="train[:100]")
```

Next, take a look at the image with the 🤗 Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) feature: