Hi,
after executing this script (the first non-optional one):
python scripts/lora.py --model mlx-community/Mistral-7B-Instruct-v0.2-4bit --train --iters 100 --steps-per-eval 10 --val-batches -1 --learning-rate 1e-5 --lora-layers 16 --test
I'm getting this error:
Loading pretrained model
Fetching 7 files: 100%|███████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 47662.55it/s]
Total parameters 1242.550M
Trainable parameters 0.213M
Loading datasets
Training
Traceback (most recent call last):
File "/Users/lorigr/Progetti/YouTube-Blog/LLMs/qlora-mlx/scripts/lora.py", line 382, in <module>
train(model, train_set, valid_set, opt, loss, tokenizer, args)
File "/Users/lorigr/Progetti/YouTube-Blog/LLMs/qlora-mlx/scripts/lora.py", line 265, in train
mx.eval(model.parameters(), optimizer.state, lvalue)
RuntimeError: [metal::Device] Unable to load function steel_attention_float32_bq32_bk16_bd128_wm4_wn1_maskfloat16
Function steel_attention_float32_bq32_bk16_bd128_wm4_wn1_maskfloat16 was not found in the library
I have an M1 Mac with 16 GB of RAM. Do you know how to solve this? Thanks 🤗