Commit cd5f7a4

DarkLight1337 authored and mzusman committed
[Doc][4/N] Reorganize API Reference (vllm-project#11843)
Signed-off-by: DarkLight1337 <[email protected]>
1 parent 48b5f2c commit cd5f7a4

File tree

24 files changed: +38 −67 lines changed


.buildkite/test-pipeline.yaml
Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ steps:
     - pip install -r requirements-docs.txt
     - SPHINXOPTS=\"-W\" make html
     # Check API reference (if it fails, you may have missing mock imports)
-    - grep \"sig sig-object py\" build/html/dev/sampling_params.html
+    - grep \"sig sig-object py\" build/html/api/params.html

 - label: Async Engine, Inputs, Utils, Worker Test # 24min
   fast_check: true
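The CI step above greps the built HTML for Sphinx's rendered-signature CSS class; if an autodoc mock import is missing, the class never appears and the check fails. A minimal Python sketch of the same check (the helper names are hypothetical; CI just uses `grep`):

```python
# Sphinx emits the CSS classes "sig sig-object py" on every rendered Python
# API signature, so their absence usually means a mock import is missing.
from pathlib import Path

SIGNATURE_MARKER = "sig sig-object py"

def has_rendered_signatures(html: str) -> bool:
    """Return True if the page contains at least one rendered API signature."""
    return SIGNATURE_MARKER in html

def check_page(path: Path) -> bool:
    """Run the check against a built HTML file, e.g. build/html/api/params.html."""
    return has_rendered_signatures(path.read_text(encoding="utf-8"))
```

Note the checked path moves from `build/html/dev/sampling_params.html` to `build/html/api/params.html`, matching the new docs layout.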

Dockerfile
Lines changed: 2 additions & 2 deletions

@@ -2,8 +2,8 @@
 # to run the OpenAI compatible server.

 # Please update any changes made here to
-# docs/source/dev/dockerfile/dockerfile.md and
-# docs/source/assets/dev/dockerfile-stages-dependency.png
+# docs/source/contributing/dockerfile/dockerfile.md and
+# docs/source/assets/contributing/dockerfile-stages-dependency.png

 ARG CUDA_VERSION=12.4.1
 #################### BASE BUILD IMAGE ####################
3 files renamed without changes.

docs/source/design/multimodal/multimodal_index.md renamed to docs/source/api/multimodal/index.md
Lines changed: 0 additions & 10 deletions

@@ -11,18 +11,8 @@ vLLM provides experimental support for multi-modal models through the {mod}`vllm
 Multi-modal inputs can be passed alongside text and token prompts to [supported models](#supported-mm-models)
 via the `multi_modal_data` field in {class}`vllm.inputs.PromptType`.

-Currently, vLLM only has built-in support for image data. You can extend vLLM to process additional modalities
-by following [this guide](#adding-multimodal-plugin).
-
 Looking to add your own multi-modal model? Please follow the instructions listed [here](#enabling-multimodal-inputs).

-## Guides
-
-```{toctree}
-:maxdepth: 1
-
-adding_multimodal_plugin
-```

 ## Module Contents
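The surviving context lines above describe how multi-modal inputs ride alongside the text prompt in the `multi_modal_data` field of `vllm.inputs.PromptType`. A minimal sketch of that prompt shape, kept free of any vLLM import so it stands alone (the placeholder token and the image object are illustrative assumptions, not the real API surface):

```python
# Sketch of the documented prompt shape: the text prompt and the
# multi-modal payload travel together in one dict, with the modality
# name ("image") keying the data inside "multi_modal_data".

def build_mm_prompt(text: str, image) -> dict:
    """Pair a text prompt with image data under the documented field names."""
    return {
        "prompt": text,
        "multi_modal_data": {"image": image},  # modality name -> data
    }

# The "<image>" placeholder below is model-specific and hypothetical here.
prompt = build_mm_prompt("USER: <image>\nWhat is in this picture?", object())
```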
3 files renamed without changes.

docs/source/api/params.md
Lines changed: 22 additions & 0 deletions

@@ -0,0 +1,22 @@
+# Optional Parameters
+
+Optional parameters for vLLM APIs.
+
+(sampling-params)=
+
+## Sampling Parameters
+
+```{eval-rst}
+.. autoclass:: vllm.SamplingParams
+    :members:
+```
+
+(pooling-params)=
+
+## Pooling Parameters
+
+```{eval-rst}
+.. autoclass:: vllm.PoolingParams
+    :members:
+```
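The new page autodocs `vllm.SamplingParams` and `vllm.PoolingParams`. A hedged sketch of the kind of sampling fields that page documents, validated before they would be handed to `vllm.SamplingParams(**kwargs)` (the helper and its bounds are illustrative assumptions; vLLM performs its own validation internally):

```python
# Illustrative pre-validation of a few commonly documented sampling fields.
# This is a sketch, not vLLM's own checking logic.

def make_sampling_kwargs(temperature: float = 1.0,
                         top_p: float = 1.0,
                         max_tokens: int = 16) -> dict:
    """Validate a few common sampling fields and return them as kwargs."""
    if temperature < 0:
        raise ValueError("temperature must be non-negative")
    if not 0 < top_p <= 1:
        raise ValueError("top_p must be in (0, 1]")
    if max_tokens < 1:
        raise ValueError("max_tokens must be at least 1")
    return {"temperature": temperature, "top_p": top_p, "max_tokens": max_tokens}

kwargs = make_sampling_kwargs(temperature=0.8, top_p=0.95, max_tokens=64)
# With vLLM installed, these could feed vllm.SamplingParams(**kwargs).
```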
