1 parent 3fc12d0 commit aabd116

docs/source/serving/deploying_with_k8s.md
@@ -49,7 +49,7 @@ data:
 
 Next to create the deployment file for vLLM to run the model server. The following example deploys the `Mistral-7B-Instruct-v0.3` model.
 
-Here are two exampels for using NVIDIA GPU and AMD GPU.
+Here are two examples for using NVIDIA GPU and AMD GPU.
 
 - NVIDIA GPU
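For context on the section this hunk touches: the doc walks through deploying the vLLM model server on Kubernetes and then gives separate NVIDIA GPU and AMD GPU deployment examples. Below is a minimal sketch of what the NVIDIA GPU variant of such a Deployment might look like; it is not the exact manifest from the patched doc, and the names (`mistral-7b`, the `hf-token-secret` Secret) are illustrative assumptions.

```yaml
# Hypothetical sketch of a vLLM Deployment requesting one NVIDIA GPU.
# Resource/secret names are assumptions, not taken from the patched doc.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mistral-7b
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mistral-7b
  template:
    metadata:
      labels:
        app: mistral-7b
    spec:
      containers:
      - name: vllm
        image: vllm/vllm-openai:latest
        # Serve the model referenced in the diff via the OpenAI-compatible server.
        command: ["vllm", "serve", "mistralai/Mistral-7B-Instruct-v0.3"]
        env:
        - name: HUGGING_FACE_HUB_TOKEN
          valueFrom:
            secretKeyRef:
              name: hf-token-secret   # assumed Secret holding a Hugging Face token
              key: token
        ports:
        - containerPort: 8000
        resources:
          requests:
            nvidia.com/gpu: "1"       # NVIDIA device plugin resource
          limits:
            nvidia.com/gpu: "1"
```

The AMD GPU example the commit refers to would presumably differ mainly in the container image (a ROCm-based vLLM image) and the GPU resource name exposed by the AMD device plugin, but the exact values used in the doc are not shown in this hunk.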