Support repeated tool calls in a loop within AssistantAgent #6268


Open · 1 task done
ekzhu opened this issue Apr 10, 2025 · 6 comments
Labels: help wanted (Extra attention is needed), proj-agentchat
Milestone: 0.4.x-python

Comments

@ekzhu (Collaborator) commented Apr 10, 2025

Confirmation

  • I confirm that I am a maintainer and so can use this template. If I am not, I understand this issue will be closed and I will be asked to use a different template.

Issue body

Many models have been trained to call tools repeatedly, reflecting on the results between calls before deciding whether to call another tool or produce a final answer. For example, Claude models have been trained to perform exactly this.

Idea:

We can support this in AssistantAgent by introducing an optional tool_call_loop parameter to the constructor. By default, tool_call_loop=False, which means tool calls do not run in a loop -- at most a single round of tool calls is made.

Users can set tool_call_loop=True, which makes the tool calls run in a loop until the model:

  1. Produces a non-tool-call response, or
  2. Produces a handoff tool call.

This is a start. As a next step, we can introduce a ToolCallConfig class that controls how a tool call loop should behave, with parameters for setting the maximum number of iterations, preset response text, etc. -- we can consider merging reflect_on_tool_call into it as well.
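
To make this concrete, here is a rough usage sketch of the proposed option from the caller's side. Note that tool_call_loop and ToolCallConfig are hypothetical names from this proposal, not an existing API; the rest follows the current AssistantAgent constructor.

    # Hypothetical sketch -- tool_call_loop / ToolCallConfig do not exist yet.
    from autogen_agentchat.agents import AssistantAgent
    from autogen_ext.models.openai import OpenAIChatCompletionClient


    async def search_docs(query: str) -> str:
        """Toy tool standing in for a real retrieval call."""
        return f"results for {query!r}"


    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Today's default: at most a single round of tool calls per run.
    agent = AssistantAgent("assistant", model_client=model_client, tools=[search_docs])

    # Proposed: keep calling tools until the model produces a non-tool-call
    # response or a handoff tool call.
    looping_agent = AssistantAgent(
        "assistant_loop",
        model_client=model_client,
        tools=[search_docs],
        tool_call_loop=True,  # hypothetical parameter from this proposal
    )

    # Possible follow-up, as described above (also hypothetical):
    # tool_call_config=ToolCallConfig(max_iterations=5, preset_response="...")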

Please discuss and suggest ideas.

Related:
#6261
#5621

@ekzhu ekzhu added help wanted Extra attention is needed proj-agentchat labels Apr 10, 2025
@ekzhu ekzhu added this to the 0.4.x-python milestone Apr 10, 2025
@philippHorn (Contributor)

The idea sounds good to me 👍

For my use case it would be nice to have a way to enforce a text response after a certain number of loops.

In my case the model is gathering information from a vector DB. The agent sometimes decides to query multiple times if it does not get the right information. But I cannot allow too many loops, since the cost and time get too high, and the input tokens grow quite a bit with every call because the tool call results of previous calls accumulate.

So this would force the model to formulate the best answer it can with the information it has gathered after a few tries.
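
Until something like this lands in AssistantAgent, one way to get that behavior is to run the tool-call loop by hand against the Chat Completions API and disable tools on the final round so the model has to answer in text. A rough sketch, where run_tool and the hard-coded round limit are placeholders for illustration:

    # Rough sketch: cap tool-call rounds, then force a plain-text answer by
    # setting tool_choice="none" on the last request. run_tool is a placeholder.
    from openai import OpenAI

    client = OpenAI()
    MAX_TOOL_ROUNDS = 3


    def run_tool(call) -> str:
        """Placeholder: dispatch on call.function.name and return its output."""
        return f"(output of {call.function.name})"


    def answer_with_cap(messages: list, tools: list) -> str:
        for round_idx in range(MAX_TOOL_ROUNDS + 1):
            last_round = round_idx == MAX_TOOL_ROUNDS
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=messages,
                tools=tools,
                # On the last round, forbid tool calls so the model must answer
                # with whatever information it has gathered so far.
                tool_choice="none" if last_round else "auto",
            )
            message = response.choices[0].message
            if not message.tool_calls:
                return message.content  # plain-text answer -- done
            messages.append(message)  # keep the assistant's tool_calls in history
            for call in message.tool_calls:
                messages.append(
                    {"role": "tool", "tool_call_id": call.id, "content": run_tool(call)}
                )
        return ""  # unreachable: the last round cannot produce tool calls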

@zhengkezhou1

I want to try to implement this!

As @philippHorn mentioned, this feature would be very useful when we want to query a database. However, I see that Running an Agent in a Loop already provides a way to do this.

I don't have a deep understanding of the system yet, but assuming there is a tool used to interact with the database, and for some reason the desired data doesn't exist when we first query, would the difference be the following?

Running an Agent in a Loop would be considered multiple independent agent runs or interactions, with each run potentially involving one or a limited number of tool calls, whereas the tool_call_loop feature we hope to implement would be a single complete interaction or operation flow, where the agent internally and autonomously repeats tool calls until specific termination conditions are met.
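
For reference, the existing Running an Agent in a Loop pattern looks roughly like the sketch below: a round-robin team containing a single agent, plus a termination condition that stops once the agent replies with plain text. (The termination class name and its parameters follow the tutorial and may differ slightly between versions.) The proposed tool_call_loop would fold this behavior into AssistantAgent itself.

    # Rough sketch of the current single-agent-team workaround from the docs.
    import asyncio

    from autogen_agentchat.agents import AssistantAgent
    from autogen_agentchat.conditions import TextMessageTermination
    from autogen_agentchat.teams import RoundRobinGroupChat
    from autogen_ext.models.openai import OpenAIChatCompletionClient


    async def query_db(query: str) -> str:
        """Toy stand-in for a vector DB lookup tool."""
        return f"rows matching {query!r}"


    async def main() -> None:
        model_client = OpenAIChatCompletionClient(model="gpt-4o")
        agent = AssistantAgent("looped_assistant", model_client=model_client, tools=[query_db])

        # The team keeps re-running the single agent; it stops once the agent
        # produces a plain text message instead of tool calls.
        termination = TextMessageTermination(source="looped_assistant")
        team = RoundRobinGroupChat([agent], termination_condition=termination)

        result = await team.run(task="Find the latest onboarding checklist.")
        print(result.messages[-1].content)


    asyncio.run(main())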

@ekzhu (Collaborator, Author) commented Apr 15, 2025

@zhengkezhou1 if you are new to the code base, I think you can get started by sketching out your design in this issue so we can point you in the right direction.

The goal of this feature is to make it much easier to use a single agent without thinking about teams at all. The tool call loop provides a quick way to do the same thing as a round-robin team with a single agent and a termination condition; however, the implementation can be much more specific to AssistantAgent itself.

@zhengkezhou1

@ekzhu you mentioned that Claude models have been trained to perform exactly this. Do you have any relevant links? I couldn't find this type of information in the documentation. Also, I did a small test with Gemini.

My initial idea is as follows:

Instead of manual checking, we will add the result obtained from the tool call to the prompt. For example: "The result of the current tool call is: [specific result of the function call]. Is this the result we need?" After that, we will send this prompt to the model.

If the response from the model indicates that the result is not what we expect, we will loop the tool call until we get the desired result, while also ensuring that if we fail repeatedly up to a certain limit, we will stop calling and return directly.

Explanation based on the OpenAI API (a rough code sketch follows the list):

  1. Initial User Request: Start by sending the user's query to the OpenAI model using the Chat Completions endpoint. The messages parameter in the request will contain the user's initial message.

  2. Model Initiates Tool Call: If the model determines that it needs to use a tool (based on the functions you've defined in your request), the response will include a tool_calls array. The finish_reason in the response will be "tool_calls".

  3. Execute the Tool: The application will then parse the tool_calls array, identify the function to be called (based on function.name), extract the necessary arguments from tool_calls[].function.arguments, and execute that function.

  4. Construct the Evaluation Prompt: Once the tool has been executed and its result obtained, create a new message to be added to the conversation history (the messages array). This new message will have the role set to "user" and its content will be the evaluation prompt. For example:

    {
      "role": "user",
      "content": "The result of the current tool call is: `[function execution result]`. Is this the result we need? Please answer 'Yes' or 'No'."
    }

    Replace [function execution result] with the actual output from your tool (e.g., a get_weather function).

  5. Send Evaluation Prompt to Model: Make another call to the Chat Completions endpoint, this time including the entire conversation history in the messages array: the original user query, the model's tool call request (as a message with role="assistant" and the tool_calls array), the tool's response (as a message with role="tool", the tool_call_id, and content containing the function result), and the new evaluation prompt.

  6. Get Model's Evaluation: The model's response to this evaluation prompt will indicate whether it believes the tool call result is satisfactory. Examine the content of the model's response.

  7. Implement the Looping and Retry Mechanism:

    • If the model's response to the evaluation prompt indicates "No" (or a negative sentiment based on interpretation), initiate another tool call. This involves sending the updated messages array (including the model's negative evaluation) back to the Chat Completions endpoint. The model might then generate another tool_calls request.
    • Repeat steps 3–6 until the model's evaluation is "Yes" (or positive), or until the defined limits on retries or consecutive failures are hit.

  8. Final Response Generation: Once the model indicates the tool call result is satisfactory, make one final call to the Chat Completions endpoint with the complete messages array. The model should now be able to generate the final response to the user's initial query, leveraging the validated tool call result.
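
A rough sketch of steps 3–7 with the OpenAI Python SDK is below. The evaluation wording, the yes/no check, the retry limit, and the execute_tool placeholder are all illustrative assumptions, not an agreed design:

    # Rough sketch of the evaluation-prompt loop (steps 3-7 above).
    import json

    from openai import OpenAI

    client = OpenAI()
    MAX_RETRIES = 3


    def execute_tool(name: str, arguments: str) -> str:
        """Placeholder: dispatch to the real tool, e.g. get_weather."""
        return f"(result of {name} with {json.loads(arguments)})"


    def run_with_evaluation(messages: list, tools: list) -> str:
        for _ in range(MAX_RETRIES):
            response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
            message = response.choices[0].message
            if not message.tool_calls:  # the model answered directly
                return message.content

            # Steps 3-4: execute the tool, append its result, then append the
            # evaluation prompt as a user message.
            messages.append(message)
            for call in message.tool_calls:
                result = execute_tool(call.function.name, call.function.arguments)
                messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
            messages.append(
                {
                    "role": "user",
                    "content": f"The result of the current tool call is: `{result}`. "
                    "Is this the result we need? Please answer 'Yes' or 'No'.",
                }
            )

            # Steps 5-6: ask the model to evaluate the result.
            evaluation = client.chat.completions.create(model="gpt-4o", messages=messages)
            verdict = evaluation.choices[0].message.content or ""
            messages.append({"role": "assistant", "content": verdict})

            # Steps 7-8: if satisfied, request the final answer; otherwise loop.
            if verdict.strip().lower().startswith("yes"):
                final = client.chat.completions.create(model="gpt-4o", messages=messages)
                return final.choices[0].message.content

        return "Could not obtain a satisfactory tool result within the retry limit."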

@ekzhu (Collaborator, Author) commented Apr 17, 2025

> @ekzhu you mentioned that Claude models have been trained to perform exactly this. Do you have any relevant links? I couldn't find this type of information in the documentation. Also, I did a small test with Gemini.

Sorry. I made this up. However, their documentation indicates this is the recommended way; see Sequential Tools in the Anthropic documentation.

@ekzhu (Collaborator, Author) commented Apr 17, 2025

> Instead of manual checking, we will add the result obtained from the tool call to the prompt. For example: "The result of the current tool call is: [specific result of the function call]. Is this the result we need?" After that, we will send this prompt to the model.

> If the response from the model indicates that the result is not what we expect, we will loop the tool call until we get the desired result, while also ensuring that if we fail repeatedly up to a certain limit, we will stop calling and return directly.

Can we instead have the model decide directly whether to continue with another tool call, using only the tool call response? This is what the Anthropic doc recommends. The OpenAI Assistants API also uses a similar pattern; see Run Lifecycle.
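
For contrast with the evaluation-prompt version above, a minimal sketch of this "model decides" loop: the tool results go back as role="tool" messages and the next completion either requests more tools or produces the final text. This is roughly what tool_call_loop=True would do inside AssistantAgent; execute_tool and the iteration cap are placeholders.

    # Minimal sketch: no evaluation prompt -- the model itself decides whether
    # to call another tool or answer. execute_tool is a placeholder callable.
    from openai import OpenAI

    client = OpenAI()
    MAX_ITERATIONS = 5  # safety cap, analogous to a future ToolCallConfig setting


    def tool_call_loop(messages: list, tools: list, execute_tool) -> str:
        for _ in range(MAX_ITERATIONS):
            response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
            message = response.choices[0].message
            if not message.tool_calls:
                return message.content  # the model chose to stop and answer
            messages.append(message)
            for call in message.tool_calls:
                messages.append(
                    {
                        "role": "tool",
                        "tool_call_id": call.id,
                        "content": execute_tool(call.function.name, call.function.arguments),
                    }
                )
        # Cap reached: one final call with tools disabled to force a text answer.
        response = client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=tools, tool_choice="none"
        )
        return response.choices[0].message.content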
