* 925 Update Triton
Signed-off-by: Sidney L. Bryson <[email protected]>
* 925 Update to monai 1.0.0rc1
Signed-off-by: Sidney L. Bryson <[email protected]>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Signed-off-by: Sidney L. Bryson <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Wenqi Li <[email protected]>
deployment/Triton/README.md (+22 −6)
@@ -96,14 +96,18 @@ $ pip install nvidia-pyindex
 $ pip install tritonclient[all]
 ```
 5. Run the client program
-   The [client](./client/client.py) program will take an optional file input and perform classification to determine whether the study shows COVID-19 or not. See the [NVIDIA COVID-19 Classification](https://ngc.nvidia.com/catalog/models/nvidia:med:clara_pt_covid19_3d_ct_classification) example in NGC for more background.
+   The [client](./client/client_mednist.py) program will take an optional file input and perform classification of body parts using the MedNIST data set. A small subset of the data set is included.
 - The client calls the Triton service using the external port configured previously.
 ```python
-   with httpclient.InferenceServerClient("localhost:7555") as client:
+   with httpclient.InferenceServerClient("localhost:8000") as client:
 ```
 - The Triton inference response is returned:
 ```python
@@ -199,6 +203,18 @@ $ ./mednist_client_run.sh
 ```
 The expected result is a variety of classification results for body part images, along with local inference times.
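The hunk above changes the client's target port to 8000, Triton's default HTTP port. As a rough sketch of how such a client talks to the server: the input/output tensor names (`INPUT_0`, `OUTPUT_0`), the model name, and the preprocessing below are illustrative assumptions, not taken from the repository.

```python
# Hypothetical sketch of a Triton HTTP client call; tensor names, model name,
# and preprocessing are assumptions for illustration only.
import numpy as np


def prepare_image(batch):
    """Normalize a MedNIST-style 8-bit image batch to float32 in [0, 1]."""
    arr = np.asarray(batch, dtype=np.float32)
    return arr / 255.0


def classify(image, url="localhost:8000", model="mednist"):
    """Send one preprocessed batch to a running Triton server.

    Requires `pip install tritonclient[all]` and a server listening on `url`.
    """
    import tritonclient.http as httpclient  # deferred: only needed with a live server

    with httpclient.InferenceServerClient(url) as client:
        inp = httpclient.InferInput("INPUT_0", image.shape, "FP32")
        inp.set_data_from_numpy(image)
        result = client.infer(model, inputs=[inp])
        return result.as_numpy("OUTPUT_0")
```

The preprocessing helper can be exercised without a server; `classify` only works against a running Triton instance exposing a matching model.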
+## Notes about the `requirements.txt` file and installed CUDA Drivers
+
+- The `requirements.txt` file is used to place requirements into the Triton server container, but it also sets up the client environment.
+- Take care with the version of PyTorch (`torch`): it must match the specific GPU and installed driver versions. The `--extra-index-url` flag may need to be modified to correspond with the CUDA version installed on the local machine.
+- Determine your driver and CUDA version with the following command:
+
+  ```
+  nvidia-smi
+  ```
+
+- Then choose the appropriate PyTorch build by adding the matching `--extra-index-url` flag to the `requirements.txt` file.
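The advice above can be illustrated with a hypothetical `requirements.txt` fragment; the CUDA 11.8 index URL and the pinned versions are examples, not what the project actually pins.

```
# Hypothetical requirements.txt fragment: pick the index URL that matches
# the CUDA version reported by `nvidia-smi` (cu118 shown as an example).
--extra-index-url https://download.pytorch.org/whl/cu118
torch
monai==1.0.0rc1
```

For a different driver/CUDA pairing, swap the `cu118` tag for the one matching your toolkit (for example `cu117`).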