
add support for llama.cpp local server #1

@nischalj10

Description

llama.cpp is much faster than Ollama, and it also provides an OpenAI API-compatible local server. This would be a much better way to package local models in desktop apps and would be a great addition to the repo.
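For context, here's a minimal sketch of what a desktop app could do against llama.cpp's bundled `llama-server` using the official OpenAI Node client. The model path, port, and placeholder model name below are illustrative assumptions, not anything from this repo:

```ts
import OpenAI from "openai";

// Assumes llama-server is already running, e.g.:
//   ./llama-server -m ./models/model.gguf --port 8080
// (8080 is llama-server's default port; the model path is a placeholder.)
const client = new OpenAI({
  baseURL: "http://localhost:8080/v1",
  apiKey: "sk-no-key-required", // llama-server does not require a key by default
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "local", // placeholder; llama-server serves whatever GGUF it loaded
    messages: [{ role: "user", content: "Hello from a desktop app!" }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```

Because the server speaks the OpenAI wire protocol, the same client code works unchanged against the hosted OpenAI API, which keeps the app's model layer swappable between local and remote backends.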
