@@ -325,6 +325,11 @@ Text Embedding
     - Example HF Models
     - :ref:`LoRA <lora>`
     - :ref:`PP <distributed_serving>`
+  * - :code:`BertModel`
+    - BERT-based
+    - :code:`BAAI/bge-base-en-v1.5`, etc.
+    -
+    -
   * - :code:`Gemma2Model`
     - Gemma2-based
     - :code:`BAAI/bge-multilingual-gemma2`, etc.
@@ -340,6 +345,16 @@ Text Embedding
     - :code:`ssmits/Qwen2-7B-Instruct-embed-base`, :code:`Alibaba-NLP/gte-Qwen2-1.5B-instruct`, etc.
     - ✅︎
     - ✅︎
+  * - :code:`RobertaModel`, :code:`RobertaForMaskedLM`
+    - RoBERTa-based
+    - :code:`sentence-transformers/all-roberta-large-v1`, etc.
+    -
+    -
+  * - :code:`XLMRobertaModel`
+    - XLM-RoBERTa-based
+    - :code:`intfloat/multilingual-e5-large`, etc.
+    -
+    -
 
 .. important::
     Some model architectures support both generation and embedding tasks.
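
As a rough illustration of how one of the embedding checkpoints added above might be run offline, here is a minimal sketch assuming vLLM's ``LLM.encode()`` entry point and the ``task`` engine argument; the exact task string and output attribute names vary between vLLM releases, so treat them as assumptions rather than a confirmed API.

.. code-block:: python

    from vllm import LLM

    # BAAI/bge-base-en-v1.5 is the BertModel example from the table above.
    # The task value may be "embedding" or "embed" depending on the vLLM version.
    llm = LLM(model="BAAI/bge-base-en-v1.5", task="embedding")

    outputs = llm.encode(["What is the capital of France?"])
    for output in outputs:
        # Each result is assumed to carry the pooled embedding vector.
        print(len(output.outputs.embedding))
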
@@ -390,6 +405,36 @@ Classification
 .. note::
     As an interim measure, these models are supported in both offline and online inference via Embeddings API.
 
+Sentence Pair Scoring
+---------------------
+
+.. list-table::
+  :widths: 25 25 50 5 5
+  :header-rows: 1
+
+  * - Architecture
+    - Models
+    - Example HF Models
+    - :ref:`LoRA <lora>`
+    - :ref:`PP <distributed_serving>`
+  * - :code:`BertForSequenceClassification`
+    - BERT-based
+    - :code:`cross-encoder/ms-marco-MiniLM-L-6-v2`, etc.
+    -
+    -
+  * - :code:`RobertaForSequenceClassification`
+    - RoBERTa-based
+    - :code:`cross-encoder/quora-roberta-base`, etc.
+    -
+    -
+  * - :code:`XLMRobertaForSequenceClassification`
+    - XLM-RoBERTa-based
+    - :code:`BAAI/bge-reranker-v2-m3`, etc.
+    -
+    -
+
+.. note::
+    These models are supported in both offline and online inference via Score API.
 
 Multimodal Language Models
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
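
The Score API referenced in the new note pairs two texts and returns a relevance score from the cross-encoder. A minimal offline sketch follows, assuming vLLM's ``LLM.score()`` method; the broadcasting of a single first text against a list of second texts and the ``output.outputs.score`` attribute are assumptions about that API, not something shown in this diff.

.. code-block:: python

    from vllm import LLM

    # BAAI/bge-reranker-v2-m3 is the XLM-RoBERTa cross-encoder listed above.
    llm = LLM(model="BAAI/bge-reranker-v2-m3")

    # Assumed behavior: a single first text is paired with each second text.
    outputs = llm.score(
        "What is the capital of France?",
        ["Paris is the capital of France.", "The Eiffel Tower is in Berlin."],
    )
    for output in outputs:
        print(output.outputs.score)

Online serving is expected to expose an equivalent scoring route on the OpenAI-compatible server, but the exact endpoint is not part of this diff.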