@@ -159,7 +159,9 @@ def create(
Determinism is not guaranteed, and you should refer to the `system_fingerprint`
response parameter to monitor changes in the backend.

- stop: Up to 4 sequences where the API will stop generating further tokens. The
+ stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+ Up to 4 sequences where the API will stop generating further tokens. The
returned text will not contain the stop sequence.

stream: Whether to stream back partial progress. If set, tokens will be sent as
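For context, here is a minimal sketch of how the `stop` parameter described in these docstrings is used with the legacy `completions.create` call. The model name and prompt are placeholders, and per the updated text, `stop` should be omitted for the `o3` and `o4-mini` reasoning models.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Up to 4 stop sequences; generation halts before any of them would be
# emitted, and the matched sequence is not included in the returned text.
completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # placeholder model; o3/o4-mini do not accept `stop`
    prompt="Q: What is the capital of France?\nA:",
    max_tokens=50,
    stop=["\nQ:", "\n\n"],
)
print(completion.choices[0].text)
```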
@@ -319,7 +321,9 @@ def create(
Determinism is not guaranteed, and you should refer to the `system_fingerprint`
response parameter to monitor changes in the backend.

- stop: Up to 4 sequences where the API will stop generating further tokens. The
+ stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+ Up to 4 sequences where the API will stop generating further tokens. The
returned text will not contain the stop sequence.

stream_options: Options for streaming response. Only set this when you set `stream: true`.
@@ -472,7 +476,9 @@ def create(
Determinism is not guaranteed, and you should refer to the `system_fingerprint`
response parameter to monitor changes in the backend.

- stop: Up to 4 sequences where the API will stop generating further tokens. The
+ stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+ Up to 4 sequences where the API will stop generating further tokens. The
returned text will not contain the stop sequence.

stream_options: Options for streaming response. Only set this when you set `stream: true`.
@@ -703,7 +709,9 @@ async def create(
Determinism is not guaranteed, and you should refer to the `system_fingerprint`
response parameter to monitor changes in the backend.

- stop: Up to 4 sequences where the API will stop generating further tokens. The
+ stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+ Up to 4 sequences where the API will stop generating further tokens. The
returned text will not contain the stop sequence.

stream: Whether to stream back partial progress. If set, tokens will be sent as
@@ -863,7 +871,9 @@ async def create(
Determinism is not guaranteed, and you should refer to the `system_fingerprint`
response parameter to monitor changes in the backend.

- stop: Up to 4 sequences where the API will stop generating further tokens. The
+ stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+ Up to 4 sequences where the API will stop generating further tokens. The
returned text will not contain the stop sequence.

stream_options: Options for streaming response. Only set this when you set `stream: true`.
@@ -1016,7 +1026,9 @@ async def create(
Determinism is not guaranteed, and you should refer to the `system_fingerprint`
response parameter to monitor changes in the backend.

- stop: Up to 4 sequences where the API will stop generating further tokens. The
+ stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+ Up to 4 sequences where the API will stop generating further tokens. The
returned text will not contain the stop sequence.

stream_options: Options for streaming response. Only set this when you set `stream: true`.
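And a corresponding sketch for the async `create` overloads touched by the later hunks, combining `stop` with `stream=True`; again the model and prompt are placeholders.

```python
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

    # With stream=True the call returns an async iterator of completion
    # chunks; the stop sequences still apply and are not echoed back.
    stream = await client.completions.create(
        model="gpt-3.5-turbo-instruct",  # placeholder model; o3/o4-mini do not accept `stop`
        prompt="List three prime numbers:",
        max_tokens=50,
        stream=True,
        stop=["\n\n"],
    )
    async for chunk in stream:
        print(chunk.choices[0].text, end="", flush=True)
    print()


asyncio.run(main())
```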