README.md: 8 additions & 8 deletions
@@ -10,13 +10,13 @@
 ---
 
-# Announcement
+## Announcement
 
 - [2024-06] 🎬🎬 `lmms-eval/v0.2` has been upgraded to support video evaluations for video models such as LLaVA-NeXT Video and Gemini 1.5 Pro, across tasks including EgoSchema, PerceptionTest, VideoMME, and more. Please refer to the [blog](https://lmms-lab.github.io/posts/lmms-eval-0.2/) for more details.
 - [2024-03] 📝📝 We have released the first version of `lmms-eval`. Please refer to the [blog](https://lmms-lab.github.io/posts/lmms-eval-0.1/) for more details.
@@ -32,7 +32,7 @@ In the field of language models, there has been a valuable precedent set by the
 We have humbly absorbed the exquisite and efficient design of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and introduce **lmms-eval**, an evaluation framework meticulously crafted for the consistent and efficient evaluation of LMMs.
 
-# Installation
+## Installation
 
 For formal usage, you can install the package from PyPI by running the following command:
 
 ```bash
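The hunk cuts off at the opening code fence, so the install command itself is not shown. As a minimal sketch, assuming the package is published on PyPI under the name `lmms-eval`:

```bash
# Assumed PyPI package name; the actual command is cut off by the hunk boundary
pip install lmms-eval
```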
@@ -93,7 +93,7 @@ We also provide the raw data exported from Weights & Biases for the detailed res
 Development will continue on the main branch, and we encourage you to give us feedback on desired features and further improvements to the library, or to ask questions, either in issues or PRs on GitHub.
 
-# Multiple Usages
+## Multiple Usages
 
 **Evaluation of LLaVA on MME**
@@ -191,19 +191,19 @@ python3 -m lmms_eval \
     --verbosity=INFO
 ```
 
-## Supported models
+### Supported models
 
 Please check [supported models](lmms_eval/models/__init__.py) for more details.
 
-## Supported tasks
+### Supported tasks
 
 Please check [supported tasks](lmms_eval/docs/current_tasks.md) for more details.
 
-# Add Customized Model and Dataset
+## Add Customized Model and Dataset
 
 Please refer to our [documentation](docs/README.md).
 
-# Acknowledgement
+## Acknowledgement
 
 lmms_eval is a fork of [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). We recommend reading through the [docs of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs) for relevant information.
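The `@@ -191,19 +191,19 @@` hunk shows only the tail of the evaluation command. As a sketch of what a full "Evaluation of LLaVA on MME" invocation might look like: the `python3 -m lmms_eval` entry point and `--verbosity=INFO` come from the hunk itself, while the remaining flags and the checkpoint name are assumptions not shown in this diff.

```bash
# Hypothetical invocation; only the entry point and the trailing
# --verbosity flag appear in the hunk above.
python3 -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.5-7b" \
    --tasks mme \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_v1.5_mme \
    --output_path ./logs/ \
    --verbosity=INFO
```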