I freshly pulled 7e4ea5b and ran make clean && make, and it fails to load a model converted from PyTorch using the tools from revision 63d2046 (using https://github.com/akx/ggify):
llama.cpp: loading model from models/ausboss-llama-30b-supercot-q8_0.bin
error loading model: llama.cpp: tensor '�+� ��s��93:�a-�%��Y��8Ɓ0�&�M,�9�4������"/�@�չ�"*+c�5�������9�>+n��!������O...' should not be 2563577093-dimensional
llama_init_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/ausboss-llama-30b-supercot-q8_0.bin'
main: error: unable to load model
I re-converted the model with 7e4ea5b; apparently the old file had been
llama_model_load_internal: format = ggjt v2 (latest)
and the new one is
llama_model_load_internal: format = ggjt v3 (latest)
(and 6% smaller!)
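For anyone hitting the same message, a quick way to tell which ggjt revision a .bin file is, without trying to load it, is to read its first eight bytes. This is only a minimal sketch, assuming the header is a uint32 magic 0x67676a74 ('ggjt') followed by a uint32 version in host (little-endian) byte order, which is how llama.cpp appears to write it at this revision:

```cpp
#include <cstdint>
#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <model.bin>\n", argv[0]);
        return 1;
    }
    std::FILE * f = std::fopen(argv[1], "rb");
    if (!f) {
        std::perror("fopen");
        return 1;
    }
    uint32_t magic = 0, version = 0;
    // assumption: ggjt files begin with a uint32 magic 0x67676a74 ('ggjt')
    // followed by a uint32 version, read in host (little-endian) byte order
    const bool ok = std::fread(&magic,   sizeof(magic),   1, f) == 1 &&
                    std::fread(&version, sizeof(version), 1, f) == 1;
    std::fclose(f);
    if (!ok) {
        std::fprintf(stderr, "%s: file too short to contain a header\n", argv[1]);
        return 1;
    }
    if (magic == 0x67676a74u) {
        std::printf("%s: ggjt v%u\n", argv[1], (unsigned) version);
    } else {
        std::printf("%s: magic 0x%08x (not ggjt)\n", argv[1], (unsigned) magic);
    }
    return 0;
}
```

If the header is laid out as assumed, this should print ggjt v2 for the old file above and ggjt v3 for the re-converted one.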
It would be nice if there were an error saying that ggjt v2 is not supported, instead of dumping out garbage tensor names and mind-bendingly large tensor dimensionalities 😁, but I suppose this doesn't necessarily need any action right now.
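For reference, something along these lines is what I have in mind; it is purely a sketch (the function name is made up, and this is not the actual loader code), just checking the header up front and bailing out with a readable message:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical pre-load guard (made-up function name, not the current llama.cpp
// code): read just the header and refuse anything older than ggjt v3 with a
// readable message, before any tensor metadata gets interpreted.
static bool model_file_version_supported(const char * path) {
    std::FILE * f = std::fopen(path, "rb");
    if (!f) {
        std::fprintf(stderr, "error loading model: failed to open %s\n", path);
        return false;
    }
    uint32_t magic = 0, version = 0;
    const bool have_header = std::fread(&magic,   sizeof(magic),   1, f) == 1 &&
                             std::fread(&version, sizeof(version), 1, f) == 1;
    std::fclose(f);
    if (have_header && magic == 0x67676a74u && version < 3) {
        std::fprintf(stderr,
                     "error loading model: %s is ggjt v%u; this build only reads ggjt v3, "
                     "please re-convert or re-quantize the model\n",
                     path, (unsigned) version);
        return false;
    }
    return true; // let the normal loader handle everything else
}
```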
This seems to be related to