Commit ed88068 (1 parent: fea3806)

Update README.md

File tree

1 file changed: +8 / -8 lines changed


README.md

Lines changed: 8 additions & 8 deletions
@@ -10,13 +10,13 @@
 
 ---
 
-# Announcement
+## Announcement
 
 - [2024-06] 🎬🎬 The `lmms-eval/v0.2` has been upgraded to support video evaluations for video models such as LLaVA-NeXT Video and Gemini 1.5 Pro across tasks including EgoSchema, PerceptionTest, VideoMME, and more. Please refer to the [blog](https://lmms-lab.github.io/posts/lmms-eval-0.2/) for more details.
 
 - [2024-03] 📝📝 We have released the first version of `lmms-eval`; please refer to the [blog](https://lmms-lab.github.io/posts/lmms-eval-0.1/) for more details.
 
-# Why `lmms-eval`?
+## Why `lmms-eval`?
 
 <p align="center" width="80%">
 <img src="https://i.postimg.cc/L5kNJsJf/Blue-Purple-Futuristic-Modern-3-D-Tech-Company-Business-Presentation.png" width="100%" height="80%">
@@ -32,7 +32,7 @@ In the field of language models, there has been a valuable precedent set by the
 
 We humbly absorbed the exquisite and efficient design of [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and introduce **lmms-eval**, an evaluation framework meticulously crafted for consistent and efficient evaluation of LMMs.
 
-# Installation
+## Installation
 
 For formal usage, you can install the package from PyPI by running the following command:
 ```bash
@@ -93,7 +93,7 @@ We also provide the raw data exported from Weights & Biases for the detailed res
 
 Our development will continue on the main branch, and we encourage you to give us feedback on desired features and further improvements to the library, or to ask questions, in issues or PRs on GitHub.
 
-# Multiple Usages
+## Multiple Usages
 
 **Evaluation of LLaVA on MME**
 
@@ -191,19 +191,19 @@ python3 -m lmms_eval \
     --verbosity=INFO
 ```
 
-## Supported models
+### Supported models
 
 Please check [supported models](lmms_eval/models/__init__.py) for more details.
 
-## Supported tasks
+### Supported tasks
 
 Please check [supported tasks](lmms_eval/docs/current_tasks.md) for more details.
 
-# Add Customized Model and Dataset
+## Add Customized Model and Dataset
 
 Please refer to our [documentation](docs/README.md).
 
-# Acknowledgement
+## Acknowledgement
 
 lmms_eval is a fork of [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). We recommend reading the [docs of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs) for relevant information.
 