Does anyone want an LLM-based autosuggester? #1993

Open

lstein opened this issue May 30, 2025 · 8 comments

@lstein

lstein commented May 30, 2025

I've written an AutoSuggest class that suggests prompt completions using a locally installed or remotely hosted large language model. It can be customized to produce different types of completions depending on the writing task (coding, fiction, documentation) and is optionally aware of the context in which the prompt is being written.

Is there any interest in my contributing this to the repo as a pull request?

[image attachment]

Note that the class introduces a number of package dependencies:

  • langchain
  • langchain_core
  • PyEnchant
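As a rough illustration of the idea (not the actual PR code; the model call is stubbed out, and every name here is an assumption), the core of such a suggester boils down to: send the text before the cursor to a model, take the first line of the continuation, and offer it as the inline suggestion:

```python
from typing import Callable

def suggest_completion(text_before_cursor: str,
                       complete: Callable[[str], str],
                       max_context: int = 500) -> str:
    """Return a single-line completion for the current input.

    `complete` is any callable that maps a prompt to a model
    continuation (langchain, litellm, a local llama server, ...).
    """
    # Keep the request small: only the tail of the buffer is sent.
    context = text_before_cursor[-max_context:]
    continuation = complete(context)
    # Some models echo the prompt back; strip it if present.
    if continuation.startswith(context):
        continuation = continuation[len(context):]
    # Suggestions are shown as inline ghost text, so keep them to one line.
    return continuation.splitlines()[0] if continuation else ""

# Stub model for demonstration: completes a well-known phrase.
def fake_llm(prompt: str) -> str:
    return " jumps over the lazy dog\nand more" if prompt.endswith("fox") else ""

print(suggest_completion("The quick brown fox", fake_llm))
# -> " jumps over the lazy dog"
```

In prompt_toolkit terms, logic like this would presumably live in an `AutoSuggest` subclass, with `get_suggestion()` wrapping the returned string in a `Suggestion`.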

I haven't done this yet, but I'm also planning to try an AutoCompleter, which would provide a pop-up menu of next tokens ordered by their probabilities. I'm not sure how useful this will be, however.
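The AutoCompleter idea could be sketched similarly; assuming the backend can return per-token probabilities (faked here with a dict), the menu is just the candidates sorted by descending probability:

```python
def rank_next_tokens(probs: dict[str, float], top_k: int = 5) -> list[str]:
    """Order candidate next tokens by probability, highest first."""
    ordered = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return [token for token, _ in ordered[:top_k]]

# Faked model output: token -> probability.
candidates = {"the": 0.41, "a": 0.22, "an": 0.05, "this": 0.18}
print(rank_next_tokens(candidates, top_k=3))  # -> ['the', 'a', 'this']
```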

@lin-calvin
Copy link

Cool idea! But what about litellm? It's much lighter than langchain, so putting this in contrib would be nice. But wouldn't adding it cause the pip install to become slower?

@asmeurer
Copy link
Contributor

Related #1913

@lstein
Copy link
Author

lstein commented Jun 2, 2025

> Cool idea! But what about litellm? It's much lighter than langchain, so putting this in contrib would be nice. But wouldn't adding it cause the pip install to become slower?

The code doesn't use any of langchain's fancier features, so swapping out langchain for lightllm is easily doable.

I hadn't heard about lightllm until now. It seems to have an order of magnitude fewer GitHub stars and forks than langchain, but is it up-and-coming?

Regarding pip install, I don't like adding a bunch of dependencies to a project just in order to support one of its lesser-used features. I could make this an optional feature in prompt_toolkit's pyproject.toml:

pip install .[aisuggest]

So it wouldn't slow the default pip install at all.
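For reference, the optional-dependency declaration might look something like this in pyproject.toml (the extra name and the exact dependency list are illustrative, not the actual PR's metadata):

```toml
[project.optional-dependencies]
# Hypothetical extra: installs the LLM autosuggest dependencies only on request.
aisuggest = [
    "langchain",
    "langchain-core",
    "pyenchant",
]
```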

@lin-calvin

> I hadn't heard about lightllm until now. It seems to have an order of magnitude fewer GitHub stars and forks than langchain, but is it up-and-coming?

I meant litellm, not lightllm.

@lstein
Author

lstein commented Jun 5, 2025

I just opened a pull request: #1995. I did have a look at litellm, but it is a pretty low-level API that lacks many of the features langchain brings, such as support for tools, agents, and chat memory.

The size difference is not all that significant either: litellm with all its dependencies consumes 34 MB of disk, while langchain and all its dependencies use 54 MB.

Porting the autosuggester to litellm would not be particularly difficult, and I am happy to do so if there is demand for it.

@lin-calvin

> I did have a look at litellm, but it is a pretty low-level API that lacks many of the features langchain brings, such as support for tools, agents, and chat memory.

What about the plain OpenAI SDK?

@lstein
Author

lstein commented Jun 6, 2025

> What about the plain OpenAI SDK?

Correct me if I'm wrong, but wouldn't using the OpenAI SDK lock people into the OpenAI API? I want people to be able to swap in Anthropic, local Llama models, Gemini, and all the other alternatives out there.

@lin-calvin

> Correct me if I'm wrong, but wouldn't using the OpenAI SDK lock people into the OpenAI API? I want people to be able to swap in Anthropic, local Llama models, Gemini, and all the other alternatives out there.

It seems that nearly everyone supports the OpenAI-compatible API these days.
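For what it's worth, the "OpenAI-compatible" point can be illustrated without any SDK at all: many hosted and local servers (llama.cpp, Ollama, vLLM, ...) accept the same /v1/chat/completions request shape, so only the base URL changes. A stdlib-only sketch (the URL and model name below are examples, not endorsements):

```python
import json
from urllib.request import Request

def build_chat_request(base_url: str, model: str, prompt: str,
                       api_key: str = "none") -> Request:
    """Build an OpenAI-style chat completion request for any compatible server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

# Swapping providers is just a different base_url:
req = build_chat_request("http://localhost:11434/v1", "llama3", "hi")
print(req.full_url)  # -> http://localhost:11434/v1/chat/completions
```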
