
Commit 5475aa0

Update docs to fix broken links (#1300)

1 parent c936110

File tree

1 file changed (+1, -1)


docs/source/user-guides/local-models.md

Lines changed: 1 addition & 1 deletion
@@ -149,7 +149,7 @@ user@host:~/$ export HUGGING_FACE_HUB_TOKEN=<your_huggingface_token>
 
 ### Procedure
 
-1. Setup vLLM Completions Endpoint Locally. While vLLM provides an [official Docker image](https://docs.vllm.ai/en/latest/deployment/docker.html#use-vllm-s-official-docker-image), it assumes that you have GPUs available. However, if you are running vLLM on a machine without GPUs, you can use the [Dockerfile.cpu](https://github.com/vllm-project/vllm/blob/main/Dockerfile.cpu) for x86 architecture and [Dockerfile.arm](https://github.com/vllm-project/vllm/blob/main/Dockerfile.arm) for ARM architecture.
+1. Setup vLLM Completions Endpoint Locally. While vLLM provides an [official Docker image](https://docs.vllm.ai/en/latest/deployment/docker.html#use-vllm-s-official-docker-image), it assumes that you have GPUs available. However, if you are running vLLM on a machine without GPUs, you can use the [Dockerfile.cpu](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.cpu) for x86 architecture and [Dockerfile.arm](https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.arm) for ARM architecture.
 
 ```console
 user@host:~/$ git clone https://github.com/vllm-project/vllm.git
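The changed paragraph amounts to a small how-to: on a GPU-less machine, clone vLLM and build a CPU-only image from the architecture-appropriate Dockerfile, which this commit relocates under `docker/`. A minimal sketch of that step, assuming the post-commit repository layout (the `vllm-cpu` image tag is our own choice, not from the commit):

```shell
# Hedged sketch: pick the CPU Dockerfile matching the host architecture,
# per the updated links in this commit (files now live under docker/).
ARCH="$(uname -m)"
if [ "$ARCH" = "x86_64" ]; then
    DOCKERFILE="docker/Dockerfile.cpu"   # x86 CPU-only build
else
    DOCKERFILE="docker/Dockerfile.arm"   # ARM CPU-only build
fi
echo "docker build -f $DOCKERFILE -t vllm-cpu ."

# To actually build, run inside a clone of https://github.com/vllm-project/vllm.git:
# docker build -f "$DOCKERFILE" -t vllm-cpu .
```

The echoed command is what you would run from the repository root after `git clone`; the actual `docker build` is left commented out since it requires the clone and a Docker daemon.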
