
Commit 6303c8c

Add warning comments referring to unimplemented functionality
1 parent a3bf37d commit 6303c8c

2 files changed: 16 additions & 0 deletions


api/api.py

Lines changed: 11 additions & 0 deletions
@@ -224,6 +224,17 @@ def __init__(self, *args, **kwargs):
     def completion(self, completion_request: CompletionRequest):
         """Handle a chat completion request and yield a chunked response.

+        ** Warning ** : Not all arguments of the CompletionRequest are consumed, as the server isn't completely implemented.
+        The current treatment of parameters is described below.
+
+        - messages: The server consumes the final element of the array as the prompt.
+        - model: This has no impact on the server state, i.e. changing the model in the request
+        will not change which model is responding. Instead, use the --model flag to select the model when starting the server.
+        - temperature: This is used to control the randomness of the response. The server will use the temperature provided in the request.
+
+        See https://github.com/pytorch/torchchat/issues/973 for more details.
+
+
         Args:
             completion_request: Request object with prompt and other parameters.
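
For reference, the parameter treatment described in this docstring can be sketched as follows. This is a minimal, illustrative sketch using a plain dict in place of the real CompletionRequest; the helper name and the example payload are assumptions for illustration, not part of torchchat:

    # Illustrative sketch only: mirrors the documented treatment of the request
    # fields; it is not the actual OpenAiApiGenerator.completion implementation.

    def consumed_fields(completion_request: dict) -> dict:
        """Return only the parts of the request the server is documented to use."""
        # messages: only the final element of the array is consumed as the prompt.
        prompt = completion_request["messages"][-1]["content"]

        # model: ignored at request time; the responding model is fixed by the
        # --model flag passed when the server was started.
        _ = completion_request.get("model")

        # temperature: used to control the randomness of the response.
        temperature = completion_request.get("temperature", 1.0)

        return {"prompt": prompt, "temperature": temperature}

    request = {
        "model": "llama3",  # has no effect on which model responds
        "temperature": 0.7,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is torchchat?"},  # only this element is used
        ],
    }
    print(consumed_fields(request))
    # {'prompt': 'What is torchchat?', 'temperature': 0.7}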

server.py

Lines changed: 5 additions & 0 deletions
@@ -21,6 +21,11 @@ def chat_endpoint():
     """
     Endpoint for the Chat API. This endpoint is used to generate a response to a user prompt.
     This endpoint emulates the behavior of the OpenAI Chat API. (https://platform.openai.com/docs/api-reference/chat)
+
+    ** Warning ** : Not all arguments of the CompletionRequest are consumed.
+
+    See https://github.com/pytorch/torchchat/issues/973 and the OpenAiApiGenerator class for more details.
+
     """
     data = request.get_json()
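
Since the endpoint reads a JSON body via request.get_json() and emulates the OpenAI Chat API, a client request might look like the sketch below. The host, port, and /chat path are assumptions for illustration; the actual values depend on server.py and the flags used to launch the server.

    # Hypothetical client call; URL, port, and endpoint path are assumptions.
    import json
    import urllib.request

    payload = {
        "model": "ignored-by-the-server",  # per the warning above, not consumed
        "temperature": 0.7,
        "messages": [{"role": "user", "content": "Hello!"}],
    }

    req = urllib.request.Request(
        "http://127.0.0.1:5000/chat",  # assumed host/port/path
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))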
