* [Labels created by GPU plugin](#labels-created-by-gpu-plugin)
* [SR-IOV use with the plugin](#sr-iov-use-with-the-plugin)
* [Issues with media workloads on multi-GPU setups](#issues-with-media-workloads-on-multi-gpu-setups)
* [Workaround for QSV and VA-API](#workaround-for-qsv-and-va-api)
## Introduction
Intel GPU plugin facilitates Kubernetes workload offloading by providing access to
discrete (including Intel® Data Center GPU Flex & Max Series) and integrated Intel GPU devices
supported by the host kernel.
Use cases include, but are not limited to:
The GPU plugin functionality can be verified by deploying an [OpenCL image](../.
If installed with NFD and started with resource-management, the plugin exports a set of labels for the node. For detailed info, see the [labeling documentation](./labels.md).
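As an illustrative sketch (the label key below is a placeholder, not necessarily one the plugin actually exports; see [labels.md](./labels.md) for the real names), such node labels can be used to steer workloads to GPU nodes with a `nodeSelector`:

```yaml
# Hypothetical example: schedule a pod only onto nodes carrying a
# GPU label exported by the plugin. The label key is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-labeled-node-pod
spec:
  nodeSelector:
    gpu.intel.com/example-label: "true"   # placeholder key; see labels.md
  containers:
  - name: workload
    image: busybox                        # placeholder image
    command: ["sleep", "3600"]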
## SR-IOV use with the plugin
GPU plugin does __not__ set up SR-IOV. It has to be configured by the cluster admin.
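As a hedged sketch of what that admin-side setup can look like (the PCI address and VF count below are made-up placeholders; consult your platform documentation for the supported values), VFs are typically enabled through the kernel's standard SR-IOV sysfs interface on the physical function:

```shell
# Hypothetical example: enable 2 virtual functions on a GPU PF at
# PCI address 0000:03:00.0 (both address and count are placeholders).
echo 2 | sudo tee /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

# Verify that the VF links appeared under the PF device:
ls -l /sys/bus/pci/devices/0000:03:00.0/ | grep virtfn
```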
The GPU plugin does, however, support provisioning Virtual Functions (VFs) to containers for an SR-IOV enabled GPU. When the plugin detects a GPU with SR-IOV VFs configured, it provisions only the VFs and leaves the PF device on the host.
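For example, a container can request a GPU device (a VF, on SR-IOV enabled hardware) through the plugin's `gpu.intel.com/i915` resource name; the image and command here are placeholders:

```yaml
# Sketch of a pod requesting one GPU device from the plugin.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-vf-pod
spec:
  containers:
  - name: workload
    image: busybox               # placeholder image
    command: ["sleep", "3600"]
    resources:
      limits:
        gpu.intel.com/i915: 1    # one device; a VF when SR-IOV is configured
```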
## Issues with media workloads on multi-GPU setups
OneVPL media API, 3D and compute APIs provide device discovery