1 parent 7d86e25 commit 702fddf
README.md

```diff
@@ -18,7 +18,7 @@ The main goal is to run the model using 4-bit quantization on a MacBook
 - 4-bit quantization support
 - Runs on the CPU
 
-This was hacked in an evening - I have no idea if it works correctly.
+This was [hacked in an evening](https://github.com/ggerganov/llama.cpp/issues/33#issuecomment-1465108022) - I have no idea if it works correctly.
 Please do not make conclusions about the models based on the results from this implementation.
 For all I know, it can be completely wrong. This project is for educational purposes and is not going to be maintained properly.
 New features will probably be added mostly through community contributions, if any.
```
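The README above mentions 4-bit quantization support. As a rough illustration of the general idea (block-wise quantization with a shared per-block scale), the sketch below quantizes a block of floats to signed 4-bit integers using absmax scaling. This is an assumption-laden toy, not the actual llama.cpp quantization format, which differs in block layout and encoding details.

```python
import numpy as np

def quantize_q4(block):
    """Toy 4-bit block quantization (absmax): one float scale per block,
    values rounded to signed integers in [-8, 7]. Illustrative only;
    NOT the real llama.cpp format."""
    qmax = 7  # largest positive value representable in signed 4-bit
    absmax = np.abs(block).max()
    scale = absmax / qmax if absmax > 0 else 1.0
    q = np.clip(np.round(block / scale), -8, qmax).astype(np.int8)
    return q, scale

def dequantize_q4(q, scale):
    """Recover approximate float values from the quantized block."""
    return q.astype(np.float32) * scale

block = np.array([0.1, -0.5, 0.25, 0.9], dtype=np.float32)
q, s = quantize_q4(block)
approx = dequantize_q4(q, s)
```

With absmax rounding, each value is off by at most half a quantization step (`scale / 2`), which is the trade-off that lets the weights fit in a quarter of the memory of float16.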