[IDEA] Handle big data #1169

Open
PaleNeutron opened this issue Apr 25, 2025 · 0 comments
Labels
upgrade New feature or request

Comments

PaleNeutron commented Apr 25, 2025

Describe the Feature

In my work, most APIs return very large datasets — for example, a fund's daily holdings over the past year.

I would like the agent to be able to solve complex problems step by step, which requires retrieving data from multiple APIs at different stages.

In this scenario, the agent faces two main challenges:

  1. Sending the entire dataset to the LLM is often unnecessary. In many cases, the LLM only needs what a human would glance at in a Jupyter notebook: the first few rows plus basic column descriptions. This summary can be referred to as the table schema or metadata (a sketch of such a summary follows this list).

  2. We shouldn’t let the LLM generate API calls that embed the full dataset in the prompt: doing so is slow and can be expensive.
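
To make the first point concrete, here is a minimal sketch of what such a schema/metadata summary could look like, assuming the dataset arrives as a pandas DataFrame. The `summarize` helper is a hypothetical name used only for illustration, not an existing API:

```python
import pandas as pd

def summarize(df: pd.DataFrame, n_rows: int = 5) -> str:
    """Build a compact, LLM-friendly description of a DataFrame:
    its shape, each column with its dtype, and the first few rows."""
    columns = ", ".join(f"{name} ({dtype})" for name, dtype in df.dtypes.items())
    return "\n".join([
        f"shape: {df.shape[0]} rows x {df.shape[1]} columns",
        f"columns: {columns}",
        "head:",
        df.head(n_rows).to_string(),
    ])
```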

Proposed Solution

To solve these issues, the agent should support a memory mechanism (a code sketch follows the list):

  • When the agent receives data from an API, it should store the full dataset in memory.
  • It then generates a summary (schema or metadata) and assigns a memory ID.
  • The LLM can refer to this memory ID in future steps to operate on the actual data without needing to reprocess or resend it.
  • The agent can then fetch the data from memory using the ID and pass it to the next API or execution step (e.g., a Python sandbox).
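
A minimal sketch of such a memory store, again assuming pandas DataFrames as the payload and reusing the `summarize` helper sketched above. The `DataMemory` class and its `store`/`fetch` methods are hypothetical names that only illustrate the proposed flow:

```python
import uuid
import pandas as pd

class DataMemory:
    """Keeps full datasets out of the LLM context: only the returned
    memory ID and summary are ever placed in the prompt."""

    def __init__(self) -> None:
        self._store: dict[str, pd.DataFrame] = {}

    def store(self, df: pd.DataFrame) -> tuple[str, str]:
        """Save the full dataset; return a memory ID plus a short summary."""
        memory_id = uuid.uuid4().hex
        self._store[memory_id] = df
        summary = summarize(df)  # the schema/metadata helper sketched above
        return memory_id, summary

    def fetch(self, memory_id: str) -> pd.DataFrame:
        """Resolve a memory ID back to the full dataset for the next step
        (e.g. another API call or a Python sandbox)."""
        return self._store[memory_id]
```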

Use Case

For example, suppose I want to calculate the standard deviation of a fund’s daily return rate over the past 10 years (a worked sketch follows these steps).

  • First, the agent retrieves the full dataset from an API.
  • Then, it stores the data in memory and passes only the metadata to the LLM.
  • The LLM generates the logic using the memory ID.
  • Finally, the agent uses the memory to fetch the actual data and perform the computation.
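
Putting the pieces together, the flow might look like the sketch below, reusing the hypothetical `DataMemory` from the previous section. The synthetic DataFrame stands in for whatever real API the agent would call; the numbers are purely illustrative:

```python
import numpy as np
import pandas as pd

memory = DataMemory()

# Step 1: the agent retrieves the full dataset. Stand-in for a real API call:
# ~2520 trading days covers roughly 10 years of daily returns.
rng = np.random.default_rng(0)
returns = pd.DataFrame({
    "date": pd.bdate_range("2015-01-01", periods=2520),
    "daily_return": rng.normal(0.0, 0.01, size=2520),
})

# Step 2: store the data; only the ID and the summary enter the prompt.
memory_id, summary = memory.store(returns)

# Step 3: the LLM sees `summary`, never the 2520 rows, and emits logic
# that refers to `memory_id` (e.g. "std of daily_return in <memory_id>").

# Step 4: the agent resolves the ID and runs the computation itself.
df = memory.fetch(memory_id)
print(f"std of daily returns: {df['daily_return'].std():.4f}")
```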

Additional Information

No response

Link to Discord or GitHub Discussion

No response


@PaleNeutron PaleNeutron added the upgrade New feature or request label Apr 25, 2025
@PaleNeutron PaleNeutron changed the title Handle big data [IDEA]Handle big data Apr 25, 2025