Is your feature request related to a problem? Please describe.
The current candle-binding only supports BERT embedding models, which are usually limited to a 512-token sequence length. In contrast, EmbeddingGemma supports a 2K context and Qwen3-Embedding-0.6B supports up to 32K tokens.
Describe the solution you'd like
We should add support for more embedding models in candle-binding (e.g. EmbeddingGemma and Qwen3-Embedding-0.6B) so that longer inputs can be embedded without truncation; a hypothetical sketch of what the extension could look like is below.
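A minimal sketch, assuming the crate would expose a backend enum that records each model's context window and lets callers route inputs by length. The names `EmbeddingBackend`, `max_seq_len`, and `for_input_len` are hypothetical illustrations, not part of the current candle-binding API:

```rust
/// Hypothetical sketch: candidate embedding backends and their maximum
/// context lengths. None of these names exist in candle-binding today;
/// they only illustrate the requested extension.
#[derive(Debug, Clone, Copy)]
pub enum EmbeddingBackend {
    Bert,           // currently supported, ~512-token context
    EmbeddingGemma, // ~2K-token context
    Qwen3Embedding, // up to 32K-token context
}

impl EmbeddingBackend {
    /// Maximum number of input tokens the backend accepts.
    pub fn max_seq_len(self) -> usize {
        match self {
            EmbeddingBackend::Bert => 512,
            EmbeddingBackend::EmbeddingGemma => 2048,
            EmbeddingBackend::Qwen3Embedding => 32_768,
        }
    }

    /// Pick the smallest backend whose context window fits the input.
    pub fn for_input_len(tokens: usize) -> Option<Self> {
        [Self::Bert, Self::EmbeddingGemma, Self::Qwen3Embedding]
            .into_iter()
            .find(|b| tokens <= b.max_seq_len())
    }
}

fn main() {
    // Inputs longer than 2K tokens would be routed to a long-context model.
    let backend = EmbeddingBackend::for_input_len(4_000);
    println!("{backend:?}"); // Some(Qwen3Embedding)
}
```

This would keep the existing BERT path untouched while letting callers opt into longer-context models when their inputs exceed 512 tokens.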
Describe alternatives you've considered
Additional context