Continuously return objects from agent to launcher during runtime? #54


Closed
apoorvkh opened this issue Jul 29, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

@apoorvkh
Owner

[Not a priority, maybe not even necessary, just something to think about for the longer term]

I believe there is a use case where one might want to sync intermediate results from the agents to the launcher.

For example, maybe the user's function processes a batch of inputs (one at a time), but the procedure has an expensive one-time start-up cost. Moreover, the user wants to use the intermediate results immediately, or there are so many inputs that they want to cache the results as they go (preferably in the launcher).

Serving LLMs (for inference) might be one such example: the user wants to set up their big LLM on multiple GPUs/nodes and process their API queries in this "live" fashion.

In Python, this reminds me of the generator paradigm, which yields results one at a time as they become ready.
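As a quick illustration (with stand-in names and a trivial "model"), a generator lets the caller consume each result as soon as it is produced, while the expensive set-up runs only once:

```python
def process_batch(inputs):
    # Stand-in for an expensive one-time start-up cost
    # (e.g. loading a large model); here it's just doubling.
    model = lambda x: x * 2
    for x in inputs:
        # Yield each intermediate result as soon as it is ready,
        # instead of returning everything at the end.
        yield model(x)

for result in process_batch([1, 2, 3]):
    print(result)  # consume results one at a time: 2, 4, 6
```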

Maybe we can build a generator-like interface for our Launcher?
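A minimal sketch of what such an interface could look like, purely hypothetical: the names (`launch_streaming`, `_agent`, `square`) are invented for illustration, and threads stand in for real agents (which would be processes on other GPUs/nodes). The launcher yields results to the caller as soon as any agent produces one:

```python
import queue
import threading

def _agent(fn, shard, q):
    # Stand-in for a remote agent: runs the user's function over its
    # shard of inputs and streams each result back to the launcher.
    for x in shard:
        q.put(fn(x))
    q.put(None)  # sentinel: this agent has finished

def launch_streaming(fn, shards):
    # Hypothetical generator-like launcher: starts one "agent" per shard
    # (threads here; real agents would run on other nodes) and yields
    # results in completion order, as soon as each one is ready.
    q = queue.Queue()
    workers = [
        threading.Thread(target=_agent, args=(fn, shard, q)) for shard in shards
    ]
    for w in workers:
        w.start()
    finished = 0
    while finished < len(workers):
        item = q.get()
        if item is None:
            finished += 1
        else:
            yield item
    for w in workers:
        w.join()

def square(x):
    return x * x

# Results arrive in completion order, not submission order,
# so sort them before comparing.
results = sorted(launch_streaming(square, [[1, 2], [3, 4]]))
```

The shared-queue-with-sentinel pattern lets the launcher interleave results from all agents without polling; in a distributed setting the queue would be replaced by whatever agent-to-launcher channel the library already uses.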

@apoorvkh
Owner Author

Tracking in #84!
