
Commit e9ed210

update front-end mirroring description
1 parent a914a43 commit e9ed210

File tree

1 file changed, 8 insertions(+), 8 deletions(-)


rfcs/20200624-pluggable-device-for-tensorflow.md

Lines changed: 8 additions & 8 deletions

````diff
@@ -75,7 +75,7 @@ This section describes the user scenarios that are supported/unsupported for Plu
 
 ### Front-end Mirroring mechanism
 This section describes the front-end mirroring mechanism for python users, pointing at previous user scenarios.
-* **device type && subdevice type**
+* **Device type && Subdevice type**
 Device type is user visible. User can specify the device type for the ops. e.g, "gpu", "xpu", "cpu". Subdevice type is user visible and user can specify which subdevice to use for the device type(mirroring), e.g.("NVIDIA_GPU", "INTEL_GPU", "AMD_GPU").
 ```
 >> with tf.device("/gpu:0"):
@@ -86,32 +86,32 @@ This section describes the front-end mirroring mechanism for python users, point
 * **Front-end mirroring**
 In the case of two GPUs in the same system, e.g. NVIDIA GPU + INTEL GPU and installing the Intel GPU plugin.
 * **Option 1**
-Only plugged gpu device is visible, PluggableDevice overrides GPUDevice. If user want to use CUDA device, he need to uninstall the plugin
+Only plugged gpu device is visible, PluggableDevice(INTEL GPU) overrides the default GPUDevice(NVIDIA GPU). If user want to use NVIDIA GPU, he needs to manually uninstall the plugin.
 ```
 >> gpu_device = tf.config.experimental.list_physical_devices(`GPU`)
 >> print(gpu_device)
 [PhysicalDevice(name = `physical_device:GPU:0`), device_type = `GPU`, subdevice_type = `INTEL_GPU`]
 >> with tf.device("/gpu:0"):
-.. // place ops on PluggableDevice(Intel GPU)
+>> .. // place ops on PluggableDevice(Intel GPU)
 ```
 * **Option 2**
-Both plugged gpu device and default gpu device are visible, but only one gpu can work at the same time, plugged gpu device is default enabled, if user want to use CUDA device, he need to call mirroring API(set_sub_device_mapping()) to switch to CUDA device.
+Both plugged gpu device and default gpu device are visible, but only one gpu can work at the same time, plugged gpu device is default enabled, if user want to use NVIDIA GPU, he need to call mirroring API(set_sub_device_mapping()) to switch to NVIDIA gpu device.
 ```
 >> gpu_device = tf.config.experimental.list_physical_devices(`GPU`)
 >> print(gpu_device)
 [PhysicalDevice(name = `physical_device:GPU:0`), device_type = `GPU`, subdevice_type = `INTEL_GPU`, enabled]
 [PhysicalDevice(name = `physical_device:GPU:0`), device_type = `GPU`, subdevice_type = `NVIDIA_GPU`, not-enabled]
 >> tf.config.set_subdevice_mapping("NVIDIA_GPU")
 >> with tf.device("/gpu:0"):
-.. // place ops on GPUDevice(NVIDIA GPU)
+>> .. // place ops on GPUDevice(NVIDIA GPU)
 ```
-* **physical device name**
+* **Physical device name**
 physical device name is user visible. User can query the physical device name(e.g. "Titan V") for the specified device instance through [tf.config.experimental.get_device_details()](https://www.tensorflow.org/api_docs/python/tf/config/experimental/get_device_details).
 ```
 >> gpu_device = tf.config.experimental.list_physical_devices(`GPU`)
 >> if gpu_device:
-details = tf.config.experimental.get_device_details(gpu_device[0])
-print(details.get(`device_name`))
+>> details = tf.config.experimental.get_device_details(gpu_device[0])
+>> print(details.get(`device_name`))
 "TITAN_V, XXX"
 ```
 
````
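The "Option 2" flow in this diff (both GPUs enumerated, exactly one subdevice enabled at a time, switched via a mirroring API) can be sketched with a small plain-Python mock. Note this is an illustration of the proposed behavior only: `PhysicalDevice`, `DeviceRegistry`, and `set_subdevice_mapping` below are stand-ins invented for this sketch, not the real TensorFlow implementation or API.

```python
from dataclasses import dataclass


@dataclass
class PhysicalDevice:
    """Mock of a device entry as printed in the RFC's Option 2 example."""
    name: str
    device_type: str
    subdevice_type: str
    enabled: bool = False


class DeviceRegistry:
    """Toy registry modeling one GPU slot mirrored by two subdevice types."""

    def __init__(self):
        # Per the RFC, the plugged device (INTEL_GPU) is enabled by default;
        # the default GPUDevice (NVIDIA_GPU) is visible but not enabled.
        self.devices = [
            PhysicalDevice("physical_device:GPU:0", "GPU", "INTEL_GPU", enabled=True),
            PhysicalDevice("physical_device:GPU:0", "GPU", "NVIDIA_GPU", enabled=False),
        ]

    def list_physical_devices(self, device_type):
        return [d for d in self.devices if d.device_type == device_type]

    def set_subdevice_mapping(self, subdevice_type):
        # Mirrors the proposed tf.config.set_subdevice_mapping("NVIDIA_GPU"):
        # enable exactly the requested subdevice, disable all others.
        known = {d.subdevice_type for d in self.devices}
        if subdevice_type not in known:
            raise ValueError(f"unknown subdevice type: {subdevice_type}")
        for d in self.devices:
            d.enabled = (d.subdevice_type == subdevice_type)

    def active_subdevice(self):
        # Only one subdevice can work at a time, so exactly one is enabled.
        return next(d.subdevice_type for d in self.devices if d.enabled)


registry = DeviceRegistry()
print(registry.active_subdevice())            # INTEL_GPU (plugged device, default)
registry.set_subdevice_mapping("NVIDIA_GPU")  # switch to the default GPUDevice
print(registry.active_subdevice())            # NVIDIA_GPU
```

The key design point the mock captures is that both subdevices map to the same visible device slot (`/gpu:0`); switching the mapping changes which backend receives the ops without changing user-facing device strings.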