1 parent e9570ec commit ced212f
examples/offline_inference/openai/openai_batch.md
@@ -208,7 +208,7 @@ $ cat results.jsonl
 
 ### Additional prerequisites
 
-* Ensure you are using `vllm >= 0.6.7`.
+* Ensure you are using `vllm >= 0.7.0`.
 
 ### Step 1: Create your batch file
 
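For readers following Step 1, here is a minimal sketch of producing one line of the batch input file in Python. This is not taken from the commit: the `/score` URL, the `text_1`/`text_2` body fields, and the model name are assumptions based on vLLM's online scoring API, so check the surrounding openai_batch.md examples for the authoritative format.

```python
import json

# Hypothetical score request in the OpenAI-style batch format vLLM uses for
# chat and embeddings; the "/score" URL and text_1/text_2 fields are
# assumptions based on vLLM's online scoring API, not this commit.
request = {
    "custom_id": "request-1",  # caller-chosen ID, echoed back in results.jsonl
    "method": "POST",
    "url": "/score",
    "body": {
        "model": "BAAI/bge-reranker-v2-m3",  # placeholder scoring model
        "text_1": "What is the capital of France?",
        "text_2": "The capital of France is Paris.",
    },
}

# Each batch request occupies exactly one line of the JSONL input file.
with open("batch.jsonl", "w") as f:
    f.write(json.dumps(request) + "\n")
```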
vllm/entrypoints/openai/protocol.py
@@ -1231,7 +1231,7 @@ class BatchResponseData(OpenAIBaseModel):
 
     # The body of the response.
     body: Optional[Union[ChatCompletionResponse, EmbeddingResponse,
-                         ScoreResponse]] = (None)
+                         ScoreResponse]] = None
 
 
 class BatchRequestOutput(OpenAIBaseModel):
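The protocol.py hunk only drops the redundant parentheses around the `None` default; behavior is unchanged. For illustration, a self-contained sketch of how a field typed like `body` behaves under Pydantic, using simplified stand-ins for the real response models (the actual classes derive from `OpenAIBaseModel` and carry many more fields).

```python
from typing import Optional, Union

from pydantic import BaseModel


# Simplified stand-ins for the real vLLM response models; the actual classes
# live in vllm/entrypoints/openai/protocol.py and have many more fields.
class ChatCompletionResponse(BaseModel):
    id: str


class EmbeddingResponse(BaseModel):
    id: str


class ScoreResponse(BaseModel):
    id: str


class BatchResponseData(BaseModel):
    # The body of the response; stays None for requests that failed before
    # a response body was produced.
    body: Optional[Union[ChatCompletionResponse, EmbeddingResponse,
                         ScoreResponse]] = None


# With the plain `= None` default, constructing without a body yields None,
# exactly as with the old `= (None)` spelling.
print(BatchResponseData().body)                        # None
print(BatchResponseData(body={"id": "score-1"}).body)  # parsed into a model
```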