Hosting the ggml models in the cloud #6
Currently, I am hosting the ggml Whisper model files on my Linode server. However, it has a limited network bandwidth per month, and as more people start using whisper.cpp it won't be enough. What are some good options for hosting ~10 GB of data?
The only requirement is being able to wget/curl the files directly - i.e. Google Drive and the like are not an option.

Comments

I think you can try Hugging Face. You can download models however you want, e.g.: wget https://huggingface.co/artzemliak/whisper-tiny-test-for-load/resolve/main/tiny.pt Or clone the repo with LFS support.

Perfect! This is exactly what I was looking for!
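For reference, a minimal shell sketch of the two download paths suggested above: a direct single-file download versus a Git LFS clone. The repository name ggerganov/whisper.cpp and the file name ggml-base.en.bin are illustrative assumptions for wherever the ggml files end up, not locations confirmed in this thread.

# Fetch one model file directly; the /resolve/main/ URL serves the raw file, so wget/curl work.
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin

# Same download with curl (-L follows the redirect to the storage backend).
curl -L -o ggml-base.en.bin https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin

# Or clone the whole model repository with Git LFS (fetches every file in the repo).
git lfs install
git clone https://huggingface.co/ggerganov/whisper.cpp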