
Commit 44232ad

restore docs, tests
1 parent 8564632 commit 44232ad


63 files changed, +6681 −0 lines changed

docs/accelerators.md

Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@
# Using OpenVINO™ Toolkit containers with GPU accelerators

Containers can be used to execute inference operations with GPU acceleration or with the [virtual devices](https://docs.openvino.ai/nightly/openvino_docs_Runtime_Inference_Modes_Overview.html).

The prerequisites are as follows:

- Use a Linux kernel with GPU models supported by your integrated or discrete GPU. Check the documentation at https://dgpu-docs.intel.com/driver/kernel-driver-types.html.
On a Linux host, confirm that the character device /dev/dri is available (see the check after this list).

- On Windows Subsystem for Linux (WSL2), refer to the guidelines at https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html#
Note that on WSL2, the character device `/dev/dxg` must be present.

- The Docker image for the container must include GPU runtime drivers, as described at https://docs.openvino.ai/nightly/openvino_docs_install_guides_configurations_for_intel_gpu.html#
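A quick way to confirm the devices are present (a minimal check, assuming a Bash shell; device names may differ on your system):

```bash
# On a Linux host: list the DRM devices exposed by the GPU driver
ls -l /dev/dri
# Expect nodes such as card0 and renderD128

# On WSL2: check for the virtual GPU device
ls -l /dev/dxg
```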

Once the host and a preconfigured Docker engine are up and running, use the `docker run` parameters described below.

## Linux

The command below should report both CPU and GPU devices available for inference execution:
```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
docker run -it --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE ./samples/cpp/samples_bin/hello_query_device
```

`--device /dev/dri` - passes the GPU device to the container
`--group-add` - adds the group owning the GPU device on the host to the container user, granting permission to use the device

## Windows Subsystem for Linux

On WSL2, use the following command to start the container:

```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
docker run -it --device=/dev/dxg -v /usr/lib/wsl:/usr/lib/wsl $IMAGE ./samples/cpp/samples_bin/hello_query_device
```
`--device /dev/dxg` - passes the virtual GPU device to the container
`-v /usr/lib/wsl:/usr/lib/wsl` - mounts the required WSL libraries into the container

## Usage example

Run the benchmark app on the GPU accelerator with the `-use_device_mem` parameter, which showcases inference without copying data between CPU and GPU memory:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d GPU -use_device_mem -inference_only=false"
```
In the benchmark app, the `-use_device_mem` parameter employs `ov::RemoteTensor` as the input buffer. It demonstrates the gain from avoiding data copies between the host and the GPU device.

Run the benchmark app using both GPU and CPU. The load will be distributed across both device types:
```
docker run --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) $IMAGE bash -c " \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2023.0/models_bin/1/resnet50-binary-0001/FP16-INT1/resnet50-binary-0001.bin && \
./samples/cpp/samples_bin/benchmark_app -m resnet50-binary-0001.xml -d MULTI:GPU,CPU"
```

**Check also:**

[Prebuilt images](#prebuilt-images)

[Working with OpenVINO Containers](docs/containers.md)

[Generating dockerfiles and building the images in Docker_CI tools](docs/openvino_docker.md)

[OpenVINO GPU Plugin](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html)

docs/configure_gpu_ubuntu20.md

Lines changed: 116 additions & 0 deletions
@@ -0,0 +1,116 @@
# Configuration Guide for the Intel® Graphics Compute Runtime for OpenCL™ on Ubuntu* 20.04

Intel® Graphics Compute Runtime for OpenCL™ driver components are required to use a GPU plugin and write custom layers for Intel® Integrated Graphics.
The driver is installed in the OpenVINO™ Docker image, but you need to activate it in the container for a non-root user if you have Ubuntu 20.04 on your host.
To access GPU capabilities, you need the correct permissions on both the host and in the Docker container.
Run the following command to list the group that owns the render nodes on your host:

```bash
$ stat -c "group_name=%G group_id=%g" /dev/dri/render*
group_name=render group_id=134
```

OpenVINO™ Docker images do not contain a render group for the `openvino` non-root user because the render group does not have a fixed group ID, unlike the video group.
Choose one of the options below to set up access to a GPU device from a container.

## 1. Configure a Host Non-Root User to Use a GPU Device from an OpenVINO Container on Ubuntu 20 Host [RECOMMENDED]

To run an OpenVINO container with the default non-root user (openvino) with access to a GPU device, you need a non-root user on the host with the same ID as the `openvino` user inside the container.
By default, the `openvino` user has user ID 1000.
Create a non-root user on the host, for example `host_openvino`, with the same user ID and membership in the video, render, and docker groups:

```bash
$ useradd -u 1000 -G users,video,render,docker host_openvino
```

Now you can use the OpenVINO container with GPU access under the non-root user.

```bash
$ docker run -it --rm --device /dev/dri <image_name>
```
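
To verify that the GPU device is visible from inside the container, you can pass a simple command instead of an interactive shell (a quick check, assuming the image accepts arbitrary commands as in the other examples in this guide):

```bash
$ docker run -it --rm --device /dev/dri <image_name> ls -l /dev/dri
```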

## 2. Configure a Container to Use a GPU Device on Ubuntu 20 Host Under a Non-Root User

To run an OpenVINO container as non-root with access to a GPU device, specify the render group ID from your host:

```bash
$ docker run -it --rm --device /dev/dri --group-add=<render_group_id_on_host> <image_name>
```

For example, to obtain the render group ID from your host automatically:

```bash
$ docker run -it --rm --device /dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) <image_name>
```

Now you can use the container with GPU access under the non-root user.

## 3. Configure an Image to Use a GPU Device on Ubuntu 20 Host and Save It

To run an OpenVINO container as root with access to a GPU device, use the command below:

```bash
$ docker run -it --rm --user root --device /dev/dri --name my_container <image_name>
```

Check the groups for the GPU device in the container:

```bash
$ ls -l /dev/dri/
```

The output should look like the following:

```bash
crw-rw---- 1 root video 226, 0 Feb 20 14:28 card0
crw-rw---- 1 root 134 226, 128 Feb 20 14:28 renderD128
```

Create a render group in the container with the same group ID as on your host:

```bash
$ addgroup --gid 134 render
```

Check the groups for the GPU device in the container again:

```bash
$ ls -l /dev/dri/
```

The output should now look like the following:

```bash
crw-rw---- 1 root video 226, 0 Feb 20 14:28 card0
crw-rw---- 1 root render 226, 128 Feb 20 14:28 renderD128
```

Add the non-root user to the render group:

```bash
$ usermod -a -G render openvino
$ id openvino
```

Check that the group now contains the user:

```bash
uid=1000(openvino) gid=1000(openvino) groups=1000(openvino),44(video),100(users),134(render)
```

Then log in again as the non-root user:

```bash
$ su openvino
```

Now you can use the container with GPU access under the non-root user, or you can save that container as an image and push it to your registry.
Open another terminal and run the commands below:

```bash
$ docker commit my_container my_image
$ docker run -it --rm --device /dev/dri --user openvino my_image
```
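
If you also want to push the saved image to a registry, a minimal sketch could look like the following (the registry address `registry.example.com/my_project` is a placeholder; replace it with your own):

```bash
# Tag the committed image for your registry (placeholder address)
$ docker tag my_image registry.example.com/my_project/my_image:gpu
# Push it so other hosts can pull the preconfigured image
$ docker push registry.example.com/my_project/my_image:gpu
```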

---
\* Other names and brands may be claimed as the property of others.

docs/containers.md

Lines changed: 74 additions & 0 deletions
@@ -0,0 +1,74 @@
# Working with OpenVINO™ Toolkit Images

## Runtime images

The runtime images include the OpenVINO toolkit with all dependencies required to run inference, along with the OpenVINO API for both Python and C++.
There are no development tools installed.
Here are examples of how the runtime image can be used:

```
export IMAGE=openvino/ubuntu20_runtime:2023.0.0
```

### Building and using the OpenVINO samples

```
docker run -it -u root $IMAGE bash -c "/opt/intel/openvino/install_dependencies/install_openvino_dependencies.sh -y -c dev && ./samples/cpp/build_samples.sh && \
/root/openvino_cpp_samples_build/intel64/Release/hello_query_device"
```

### Using Python samples
```
docker run -it $IMAGE python3 samples/python/hello_query_device/hello_query_device.py
```

## Development images

Development images include the OpenVINO runtime components as well as the development tools, providing a complete environment for experimenting with OpenVINO.
Examples of how the development container can be used are shown below:

```
export IMAGE=openvino/ubuntu20_dev:2023.0.0
```

### Listing OpenVINO Model Zoo models
```
docker run $IMAGE omz_downloader --print_all
```

### Download a model
```
mkdir model
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE omz_downloader --name mozilla-deepspeech-0.6.1 -o /tmp/model
```

### Convert the model to IR format
```
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE omz_converter --name mozilla-deepspeech-0.6.1 -d /tmp/model -o /tmp/model/converted/
```

### Run the benchmark app to test the model performance
```
docker run -u $(id -u) --rm -v $(pwd)/model:/tmp/model $IMAGE benchmark_app -m /tmp/model/converted/public/mozilla-deepspeech-0.6.1/FP32/mozilla-deepspeech-0.6.1.xml
```

### Run a demo from the OpenVINO Model Zoo
```
docker run $IMAGE bash -c "git clone --depth=1 --recurse-submodules --shallow-submodules https://github.com/openvinotoolkit/open_model_zoo.git && \
cd open_model_zoo/demos/classification_demo/python && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/3/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.xml && \
curl -O https://storage.openvinotoolkit.org/repositories/open_model_zoo/2022.1/models_bin/3/resnet50-binary-0001/FP32-INT1/resnet50-binary-0001.bin && \
curl -O https://raw.githubusercontent.com/openvinotoolkit/model_server/main/demos/common/static/images/zebra.jpeg && \
python3 classification_demo.py -m resnet50-binary-0001.xml -i zebra.jpeg --labels ../../../data/dataset_classes/imagenet_2012.txt --no_show -nstreams 1 -r"
```

**Check also:**

[Prebuilt images](#prebuilt-images)

[Deployment with GPU accelerator](docs/accelerators.md)

[Generating dockerfiles and building the images in Docker_CI tools](docs/openvino_docker.md)

[OpenVINO GPU Plugin](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html)

docs/get-started.md

Lines changed: 68 additions & 0 deletions
@@ -0,0 +1,68 @@
# Getting Started with OpenVINO™ Toolkit Images

You can easily get started by using the precompiled and published Docker images.
In order to start using them, you need to meet the following prerequisites:
- Linux operating system or Windows Subsystem for Linux (WSL2)
- An installed Docker engine or a compatible container engine
- Permissions to start containers (sudo or docker group membership); see the quick check below
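
A minimal way to check these prerequisites (assuming the standard Docker CLI; `hello-world` is Docker's official test image):

```bash
# Confirm the Docker engine is installed
docker --version
# Confirm the current user is allowed to start containers
docker run --rm hello-world
```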

## Pull a Docker image

```
docker pull openvino/ubuntu20_dev:latest
```

## Start the container with an interactive session

```bash
export IMAGE=openvino/ubuntu20_dev:latest
docker run -it --rm $IMAGE /bin/bash
```

Inside the interactive session, you can run all OpenVINO samples and tools.

## Run a Python sample
If you want to try the samples, run the image with a command like the one below:

```bash
docker run -it --rm $IMAGE /bin/bash -c "python3 samples/python/hello_query_device/hello_query_device.py"
```

## Download a model via omz_downloader
```
docker run -it -u $(id -u):$(id -g) -v $(pwd)/:/model/ --rm $IMAGE \
/bin/bash -c "omz_downloader --name googlenet-v1 --precisions FP32 -o /model"
```
## Convert the model to IR format
```
docker run -it -u $(id -u):$(id -g) -v $(pwd)/:/model/ --rm $IMAGE \
/bin/bash -c "omz_converter --name googlenet-v1 --precision FP32 -d /model -o /model"
```
As a result, the converted model will be placed in the `public/googlenet-v1/FP32` folder in the current directory:
```
tree public/googlenet-v1/
public/googlenet-v1/
├── FP32
│   ├── googlenet-v1.bin
│   └── googlenet-v1.xml
├── googlenet-v1.caffemodel
├── googlenet-v1.prototxt
└── googlenet-v1.prototxt.orig
```

## Run a benchmark app

```
docker run -it -u $(id -u):$(id -g) -v $(pwd)/:/model/ --rm $IMAGE benchmark_app -m /model/public/googlenet-v1/FP32/googlenet-v1.xml
```

**Check also:**

[Prebuilt images](#prebuilt-images)

[Working with OpenVINO Containers](docs/containers.md)

[Deployment with GPU accelerator](docs/accelerators.md)

[Generating dockerfiles and building the images in Docker_CI tools](docs/openvino_docker.md)

docs/img/dockerfile_name.png

9.29 KB
