Describe the Feature
In my work, most APIs return very large datasets — for example, a fund's daily holdings over the past year.
I would like the agent to be able to solve complex problems step by step, which requires retrieving data from multiple APIs at different stages.
In this scenario, the agent faces two main challenges:
Sending the entire dataset to the LLM is often unnecessary. In many cases, the LLM only needs what a human would glance at in a Jupyter notebook, such as the first few rows and basic column descriptions. This summary can be referred to as the table schema or metadata.
We shouldn't let the LLM generate API calls that embed the full dataset: doing so is slow and can be expensive.
Proposed Solution
To solve these issues, the agent should support a memory mechanism:
When the agent receives data from an API, it should store the full dataset in memory.
It then generates a summary (schema or metadata) and assigns a memory ID.
The LLM can refer to this memory ID in future steps to operate on the actual data without needing to reprocess or resend it.
The agent can then fetch the data from memory using the ID and pass it to the next API or execution step (e.g., a Python sandbox).
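A minimal sketch of such a memory mechanism, assuming a dict-backed store on the agent side. The names (`MemoryStore`, `put`, `get`) and the summary fields are illustrative, not an existing API:

```python
import uuid


class MemoryStore:
    """Hypothetical agent-side store: full datasets live here, keyed by memory ID."""

    def __init__(self):
        self._data = {}

    def put(self, rows):
        """Store the full dataset and return (memory_id, summary).

        Only the summary would be sent to the LLM; the rows never leave the agent.
        """
        memory_id = str(uuid.uuid4())
        self._data[memory_id] = rows
        columns = sorted(rows[0]) if rows else []
        summary = {
            "memory_id": memory_id,
            "row_count": len(rows),
            "columns": columns,
            "preview": rows[:3],  # first few rows, like the head of a DataFrame
        }
        return memory_id, summary

    def get(self, memory_id):
        """Fetch the full dataset back by its memory ID for the next API or sandbox step."""
        return self._data[memory_id]
```

The LLM would then see only `summary` and refer to `memory_id` in the plan it generates; the agent resolves the ID back to real rows just before execution.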
Use Case
For example, I want to calculate the standard deviation of a fund’s daily return rate over the past 10 years.
First, the agent retrieves the full dataset from an API.
Then, it stores the data in memory and passes only the metadata to the LLM.
The LLM generates the logic using the memory ID.
Finally, the agent uses the memory to fetch the actual data and perform the computation.
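The four steps above could be wired together roughly as follows. This is a sketch under stated assumptions: `fetch_daily_navs` is a stand-in for the real API call, the NAV values are made up, and the module-level `MEMORY` dict plays the role of the agent's memory:

```python
import statistics
import uuid

# Hypothetical agent-side memory: full datasets live here, the LLM only sees IDs.
MEMORY = {}


def fetch_daily_navs():
    """Stand-in for the API call that returns a large dataset (here: a tiny sample)."""
    return [100.0, 101.0, 99.5, 100.5, 102.0]


def store_in_memory(data):
    """Step 2: keep the full data in memory, hand the LLM only a summary."""
    memory_id = str(uuid.uuid4())
    MEMORY[memory_id] = data
    summary = {"memory_id": memory_id, "rows": len(data), "dtype": "float"}
    return memory_id, summary


def std_of_daily_returns(memory_id):
    """Step 4: executed by the agent (e.g. in a Python sandbox) on the real data."""
    navs = MEMORY[memory_id]
    returns = [(b - a) / a for a, b in zip(navs, navs[1:])]
    return statistics.stdev(returns)


navs = fetch_daily_navs()          # step 1: retrieve the full dataset
mid, summary = store_in_memory(navs)  # step 2: store it; `summary` goes to the LLM
# step 3 would be the LLM emitting a plan that references `mid`
print(std_of_daily_returns(mid))   # step 4: agent resolves `mid` and computes
```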
Additional Information
No response
Link to Discord or GitHub Discussion
No response