@@ -159,7 +159,9 @@ def create(
               Determinism is not guaranteed, and you should refer to the `system_fingerprint`
               response parameter to monitor changes in the backend.
 
-          stop: Up to 4 sequences where the API will stop generating further tokens. The
+          stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+              Up to 4 sequences where the API will stop generating further tokens. The
               returned text will not contain the stop sequence.
 
           stream: Whether to stream back partial progress. If set, tokens will be sent as
@@ -319,7 +321,9 @@ def create(
               Determinism is not guaranteed, and you should refer to the `system_fingerprint`
               response parameter to monitor changes in the backend.
 
-          stop: Up to 4 sequences where the API will stop generating further tokens. The
+          stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+              Up to 4 sequences where the API will stop generating further tokens. The
               returned text will not contain the stop sequence.
 
           stream_options: Options for streaming response. Only set this when you set `stream: true`.
@@ -472,7 +476,9 @@ def create(
               Determinism is not guaranteed, and you should refer to the `system_fingerprint`
               response parameter to monitor changes in the backend.
 
-          stop: Up to 4 sequences where the API will stop generating further tokens. The
+          stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+              Up to 4 sequences where the API will stop generating further tokens. The
               returned text will not contain the stop sequence.
 
           stream_options: Options for streaming response. Only set this when you set `stream: true`.
@@ -703,7 +709,9 @@ async def create(
               Determinism is not guaranteed, and you should refer to the `system_fingerprint`
               response parameter to monitor changes in the backend.
 
-          stop: Up to 4 sequences where the API will stop generating further tokens. The
+          stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+              Up to 4 sequences where the API will stop generating further tokens. The
               returned text will not contain the stop sequence.
 
           stream: Whether to stream back partial progress. If set, tokens will be sent as
@@ -863,7 +871,9 @@ async def create(
               Determinism is not guaranteed, and you should refer to the `system_fingerprint`
               response parameter to monitor changes in the backend.
 
-          stop: Up to 4 sequences where the API will stop generating further tokens. The
+          stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+              Up to 4 sequences where the API will stop generating further tokens. The
               returned text will not contain the stop sequence.
 
           stream_options: Options for streaming response. Only set this when you set `stream: true`.
@@ -1016,7 +1026,9 @@ async def create(
               Determinism is not guaranteed, and you should refer to the `system_fingerprint`
               response parameter to monitor changes in the backend.
 
-          stop: Up to 4 sequences where the API will stop generating further tokens. The
+          stop: Not supported with latest reasoning models `o3` and `o4-mini`.
+
+              Up to 4 sequences where the API will stop generating further tokens. The
               returned text will not contain the stop sequence.
 
           stream_options: Options for streaming response. Only set this when you set `stream: true`.
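
The hunks above all make the same docstring change to the `stop` parameter. As a minimal usage sketch (not part of this commit), the snippet below shows the documented behavior with the `openai` Python client; the model names (`gpt-4o`, `o4-mini`), the prompt, and the stop sequence are illustrative assumptions.

```python
# Hedged sketch: assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Up to 4 stop sequences may be supplied; generation halts when one is reached,
# and the stop sequence itself is not included in the returned text.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "List three colors, one per line."}],
    stop=["\n\n"],
)
print(completion.choices[0].message.content)

# Per the updated docstrings, `stop` is not supported with the latest reasoning
# models (`o3`, `o4-mini`), so the parameter is simply omitted for those models.
reasoning = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "List three colors, one per line."}],
)
print(reasoning.choices[0].message.content)
```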