Feature Request: Support multimodal LLMs such as Qwen2.5-VL as embedding models #13247
Feature Description
llama.cpp should support multimodal models built upon architectures such as Qwen2.5-VL for image and text embeddings.
Motivation
Multimodal LLMs demonstrate better alignment between image and text embeddings than contrastively trained models such as CLIP, which suffer from a modality gap (a text query often matches unrelated text better than it matches a related image).
Nomic's latest vision models are designed for PDF document retrieval. nomic-embed-multimodal-3b, which generates a single embedding per rasterized PDF page, is already supported by vLLM as it is compatible with the Qwen2-VL embedding model tested here. It is not yet supported by llama.cpp.
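For illustration, below is a minimal sketch of how a single embedding per page would be used for retrieval: rank rasterized PDF pages against a text query by cosine similarity. It assumes the page and query embeddings already come from a shared-space multimodal model; the `embed_query` / `embed_pages` calls in the comments are hypothetical placeholders, not an existing API.

```python
import numpy as np

def rank_pages(query_emb: np.ndarray, page_embs: np.ndarray, top_k: int = 5):
    """Rank rasterized PDF pages against a text query by cosine similarity.

    query_emb: (d,) embedding of the text query
    page_embs: (n_pages, d) array, one embedding per rasterized page
    """
    q = query_emb / np.linalg.norm(query_emb)
    p = page_embs / np.linalg.norm(page_embs, axis=1, keepdims=True)
    scores = p @ q                       # cosine similarity per page
    order = np.argsort(-scores)[:top_k]  # best-matching pages first
    return [(int(i), float(scores[i])) for i in order]

# Hypothetical usage: embeddings would come from a multimodal model such as
# nomic-embed-multimodal-3b once llama.cpp can produce them.
# query_emb = embed_query("quarterly revenue table")   # placeholder
# page_embs = embed_pages(rasterized_page_images)      # placeholder
# print(rank_pages(query_emb, page_embs))
```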
Possible Implementation
This would build upon #13209, which adds vision support for Qwen2.5-VL. Also relevant is #12898, which brings vision to the llama.cpp server and would make the embeddings useful in practice, since there is little you can do with a single embedding generated via `llama-embedding` or similar.
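As a rough sketch of how this would be consumed, the server's existing OpenAI-compatible `/v1/embeddings` endpoint already covers the text side; the image side is what this issue asks for. The text call below reflects current text-only behaviour (assuming the server is started with embeddings enabled, e.g. `--embeddings`); the commented-out image call is purely hypothetical and not an existing API.

```python
import requests

SERVER = "http://localhost:8080"  # llama-server launched with embeddings enabled

def embed_text(text: str) -> list[float]:
    # Existing OpenAI-compatible endpoint; works for text today.
    r = requests.post(f"{SERVER}/v1/embeddings", json={"input": text})
    r.raise_for_status()
    return r.json()["data"][0]["embedding"]

# query_emb = embed_text("quarterly revenue table")

# What this feature request would add (shape is hypothetical, not an existing API):
# page_emb = embed_image("page_001.png")  # single embedding for a rasterized PDF page
```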