Commit 7d86e25

README: add "Supported platforms" + update hot topics
1 parent a931202 commit 7d86e25

File tree

1 file changed: +8 -1 lines changed


README.md

Lines changed: 8 additions & 1 deletion
@@ -5,10 +5,11 @@ Inference of [Facebook's LLaMA](https://github.com/facebookresearch/llama) model
 **Hot topics**
 
 - Running on Windows: https://github.com/ggerganov/llama.cpp/issues/22
+- Fix Tokenizer / Unicode support: https://github.com/ggerganov/llama.cpp/issues/11
 
 ## Description
 
-The main goal is to run the model using 4-bit quantization on a MacBook.
+The main goal is to run the model using 4-bit quantization on a MacBook
 
 - Plain C/C++ implementation without dependencies
 - Apple silicon first-class citizen - optimized via Arm Neon and Accelerate framework
@@ -22,6 +23,12 @@ Please do not make conclusions about the models based on the results from this i
 For all I know, it can be completely wrong. This project is for educational purposes and is not going to be maintained properly.
 New features will probably be added mostly through community contributions, if any.
 
+Supported platforms:
+
+- [X] Mac OS
+- [X] Linux
+- [ ] Windows (soon)
+
 ---
 
 Here is a typical run using LLaMA-7B:
