Commit 1aa55f8

upayuryeva authored and DamonFool committed
[Doc] Correct beam_search usage in generative_models.md (vllm-project#14363)
1 parent: b53a97c · commit: 1aa55f8

File tree: 1 file changed (+6, −4 lines)

docs/source/models/generative_models.md

Lines changed: 6 additions & 4 deletions
````diff
@@ -54,14 +54,16 @@ The {class}`~vllm.LLM.beam_search` method implements [beam search](https://huggi
 For example, to search using 5 beams and output at most 50 tokens:
 
 ```python
+from vllm import LLM
+from vllm.sampling_params import BeamSearchParams
+
 llm = LLM(model="facebook/opt-125m")
 params = BeamSearchParams(beam_width=5, max_tokens=50)
-outputs = llm.generate("Hello, my name is", params)
+outputs = llm.beam_search([{"prompt": "Hello, my name is "}], params)
 
 for output in outputs:
-    prompt = output.prompt
-    generated_text = output.outputs[0].text
-    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
+    generated_text = output.sequences[0].text
+    print(f"Generated text: {generated_text!r}")
 ```
 
 ### `LLM.chat`
````
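For reference, here is the documented example as it reads after this change, reassembled from the diff above (the inline comments are editorial, not part of the committed file):

```python
from vllm import LLM
from vllm.sampling_params import BeamSearchParams

# Load the model and configure beam search: 5 beams, at most 50 generated tokens.
llm = LLM(model="facebook/opt-125m")
params = BeamSearchParams(beam_width=5, max_tokens=50)

# Unlike the old LLM.generate example, LLM.beam_search takes a list of prompt dicts.
outputs = llm.beam_search([{"prompt": "Hello, my name is "}], params)

for output in outputs:
    # As the diff shows, beam search results expose completions via .sequences,
    # not via .outputs as in the previous generate-based example.
    generated_text = output.sequences[0].text
    print(f"Generated text: {generated_text!r}")
```

The two corrections captured by the commit are the call itself (`llm.beam_search` with a list of prompt dicts instead of `llm.generate` with a bare string) and the way results are read (`output.sequences[0].text` instead of `output.outputs[0].text`).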
