[Feature] Implement InternVL in llama.cpp #522
Comments
Thank you for your suggestions. We will gradually add support for various frameworks, and we also welcome contributions from the community.
Hey @czczup, would you please clarify the reason for closing this issue?
Any update on InternVL support in llama.cpp?
Thank you for your attention. We are actively making progress on this work, and we also welcome contributions from the community.
Just curious: why was this issue closed if you are actively making progress on it?
Thanks for reopening this issue.
Any update?
November 12: any update? Thanks in advance.
Is there anything we could help with? :) InternVL2.5 is really important for the future. :)
Following this closely. Any update, please?
Is it really that hard to do? I volunteer to implement it myself. Current progress: ggml-org/llama.cpp#9403
Thank you for your willingness to help! We greatly appreciate your initiative and would be glad to have your contributions. If you need us to provide any content or information, feel free to let us know.
@G-z-w Is this model based on the LLaVA architecture? What are the differences in input, output, internal parameters, and so on?
This model, along with the other InternVL chat models, is similar to the LLaVA framework; the specific structure is shown in the link. The differences lie in dynamic resolution and pixel shuffle. If convenient, we recommend prioritizing deployment of the InternVL 2.5 series. The parameter details are in the model card of the blog. A sketch of the pixel shuffle step follows below.
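For context, here is a minimal sketch of that pixel shuffle step, adapted from the public InternVL modeling code (the released implementation also branches on a ps_version flag, omitted here); the channels-last layout and the scale_factor default are assumptions based on the released checkpoints:

```python
import torch

def pixel_shuffle(x: torch.Tensor, scale_factor: float = 0.5) -> torch.Tensor:
    """Trade spatial resolution for channel depth on ViT features.

    x has shape (N, H, W, C). With scale_factor=0.5, each 2x2 block of
    patches is folded into the channel dimension, so the number of
    visual tokens passed to the language model drops by 4x.
    """
    n, h, w, c = x.shape
    # (N, H, W, C) -> (N, H, W*s, C/s): fold columns into channels
    x = x.view(n, h, int(w * scale_factor), int(c / scale_factor))
    x = x.permute(0, 2, 1, 3).contiguous()
    # (N, W*s, H, C/s) -> (N, W*s, H*s, C/s^2): fold rows into channels
    x = x.view(n, int(w * scale_factor), int(h * scale_factor),
               int(c / (scale_factor ** 2)))
    return x.permute(0, 2, 1, 3).contiguous()
```

A llama.cpp port would need to reproduce this reordering on the ggml tensor holding the vision encoder output before it reaches the MLP projector.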
Is the model structure of the v2.5 series identical to v1.5? I can now run v1.5 on llama.cpp: qlylangyu/llama.cpp#1
Yes, the structure of v2.5 is identical to that of the v1.5 series, except that the v2.5 series uses different language models. (One way to check this locally is sketched below.)
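If you want to verify that locally, a quick check is to diff the checkpoint configs. This sketch assumes the usual InternVLChatConfig layout with nested vision_config and llm_config blocks; the file paths are illustrative:

```python
import json

# Illustrative paths: point these at locally downloaded checkpoints.
for path in ("InternVL-Chat-V1-5/config.json", "InternVL2_5-8B/config.json"):
    with open(path) as f:
        cfg = json.load(f)
    # The vision tower should match across versions; only the language
    # backbone (e.g. InternLM2 vs. Qwen2) is expected to differ.
    print(path)
    print("  vision:", cfg["vision_config"].get("architectures"))
    print("  llm:   ", cfg["llm_config"].get("architectures"))
```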
@James4Ever0 Any luck running the v2.5 model? Thank you.
I tried llama.cpp today; it is still not supported. Any update?
Any update?
For anyone who is about to work with the current code, you can check my latest release here. The 18 KB archive contains function-level diffs generated with Universal Ctags and some Python glue, so anyone with entry-level C++ knowledge should be able to merge the changes easily. A sketch of the approach follows below.
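For reference, here is a rough sketch of how such function-level diffs can be produced; the helper names are mine, and it assumes a universal-ctags binary named ctags on PATH with JSON output support:

```python
import difflib
import json
import subprocess

def function_spans(path: str) -> dict[str, tuple[int, int]]:
    """Map function name -> (start, end) line numbers via Universal Ctags."""
    # --fields=+ne adds start/end line numbers; limit C++ tags to functions.
    out = subprocess.run(
        ["ctags", "--output-format=json", "--fields=+ne",
         "--kinds-c++=f", "-f", "-", path],
        capture_output=True, text=True, check=True).stdout
    spans = {}
    for line in out.splitlines():
        tag = json.loads(line)
        if "line" in tag and "end" in tag:
            spans[tag["name"]] = (tag["line"], tag["end"])
    return spans

def function_diff(old_path: str, new_path: str, name: str) -> str:
    """Unified diff of a single function between two versions of a file."""
    with open(old_path) as f:
        old = f.readlines()
    with open(new_path) as f:
        new = f.readlines()
    s0, e0 = function_spans(old_path)[name]
    s1, e1 = function_spans(new_path)[name]
    return "".join(difflib.unified_diff(
        old[s0 - 1:e0], new[s1 - 1:e1],
        fromfile=f"{old_path}:{name}", tofile=f"{new_path}:{name}"))
```

Merging then reduces to reviewing one unified diff per changed function instead of a whole-file patch.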
+1
Motivation
Many llama.cpp users have been requesting this. Ollama, one of the interfaces to llama.cpp, is quite popular as well. Implementing this will significantly accelerate InternVL adoption and recognition.
Related resources
ggml-org/llama.cpp#6803
Additional context
InternVL is based on the LLaMA architecture. The text-only InternLM models have already been ported to Ollama, but the multimodal ones have not.