@@ -119,14 +119,15 @@ def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.

-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well as the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.

               The maximum value for `logprobs` is 5.

-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.

               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
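The hunk above tightens the docstrings for the `logprobs`, `max_tokens`, and `logit_bias` parameters of the completions endpoint. A minimal sketch of how the three interact, using the v1 `openai` Python client; the model name is an assumption (any completions-capable model works) and `OPENAI_API_KEY` is read from the environment:

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # assumed completions-capable model
    prompt="Say this is a test:",
    max_tokens=16,               # prompt tokens + max_tokens must fit the context length
    logprobs=5,                  # 5 is the documented maximum
    logit_bias={"50256": -100},  # strongly discourage the <|endoftext|> token
)

choice = completion.choices[0]
print(choice.text)

# Each entry of top_logprobs maps a candidate token to its log probability;
# the sampled token's own logprob is always included, which is why there may
# be up to logprobs+1 entries per position.
for token, top in zip(choice.logprobs.tokens, choice.logprobs.top_logprobs):
    print(f"{token!r}: {top}")
```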
@@ -288,14 +289,15 @@ def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.

-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well as the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.

               The maximum value for `logprobs` is 5.

-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.

               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
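This hunk appears to cover the streaming overload of `create`, which carries the same docstring. For completeness, a hedged sketch of the streaming call under the same assumptions as above; `max_tokens` still caps the total number of generated tokens:

```python
from openai import OpenAI

client = OpenAI()

# stream=True yields partial completions as they are generated.
stream = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # assumed completions-capable model
    prompt="Say this is a test:",
    max_tokens=16,
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].text, end="", flush=True)
print()
```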
@@ -450,14 +452,15 @@ def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.

-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well as the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.

               The maximum value for `logprobs` is 5.

-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.

               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
@@ -687,14 +690,15 @@ async def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.

-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well as the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.

               The maximum value for `logprobs` is 5.

-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.

               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
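The async overloads document the same parameters. A corresponding sketch with `AsyncOpenAI`, same assumptions as above:

```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()  # picks up OPENAI_API_KEY from the environment


async def main() -> None:
    completion = await client.completions.create(
        model="gpt-3.5-turbo-instruct",  # assumed completions-capable model
        prompt="Say this is a test:",
        max_tokens=16,
        logprobs=5,
    )
    print(completion.choices[0].text)


asyncio.run(main())
```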
@@ -856,14 +860,15 @@ async def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.

-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well as the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.

               The maximum value for `logprobs` is 5.

-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.

               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
@@ -1018,14 +1023,15 @@ async def create(
               As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token
               from being generated.

-          logprobs: Include the log probabilities on the `logprobs` most likely tokens, as well the
-              chosen tokens. For example, if `logprobs` is 5, the API will return a list of
-              the 5 most likely tokens. The API will always return the `logprob` of the
-              sampled token, so there may be up to `logprobs+1` elements in the response.
+          logprobs: Include the log probabilities on the `logprobs` most likely output tokens, as
+              well as the chosen tokens. For example, if `logprobs` is 5, the API will return a
+              list of the 5 most likely tokens. The API will always return the `logprob` of
+              the sampled token, so there may be up to `logprobs+1` elements in the response.

               The maximum value for `logprobs` is 5.

-          max_tokens: The maximum number of [tokens](/tokenizer) to generate in the completion.
+          max_tokens: The maximum number of [tokens](/tokenizer) that can be generated in the
+              completion.

               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.