Finetune error for mlx-community/Meta-Llama-3.1-8B-Instruct-4bit #2

@pjq

Description

Really appreciate the YouTube video on fine-tuning with a Mac M1.
I can run the fine-tune successfully on my Mac M1:

python scripts/lora.py --model mlx-community/Mistral-7B-Instruct-v0.2-4bit --train --iters 100 --steps-per-eval 10 --val-batches -1 --learning-rate 1e-5 --lora-layers 16 --test

Now I want to fine-tune with mlx-community/Meta-Llama-3.1-8B-Instruct-4bit, but it fails with the error below.
Not sure if you have had a chance to try it.

(mlx-env) ➜  qlora-mlx git:(main) ✗ python scripts/lora.py --model mlx-community/Meta-Llama-3.1-8B-Instruct-4bit --iters 100 --steps-per-eval 10 --val-batches -1 --learning-rate 1e-5 --lora-layers 16 --test
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Loading pretrained model
Traceback (most recent call last):
  File "/workspace/YouTube-Blog/LLMs/qlora-mlx/scripts/lora.py", line 336, in <module>
    model, tokenizer, _ = lora_utils.load(args.model, tokenizer_config)
  File "/workspace/YouTube-Blog/LLMs/qlora-mlx/scripts/utils.py", line 149, in load
    model_args = models.ModelArgs.from_dict(config)
  File "/workspace/YouTube-Blog/LLMs/qlora-mlx/scripts/models.py", line 40, in from_dict
    return cls(
  File "<string>", line 14, in __init__
  File "/workspace/YouTube-Blog/LLMs/qlora-mlx/scripts/models.py", line 33, in __post_init__
    raise ValueError(f"rope_scaling must contain keys {required_keys}")
ValueError: rope_scaling must contain keys {'factor', 'type'}
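
For what it's worth, the `config.json` that ships with Llama 3.1 models uses a `rope_scaling` dict keyed `rope_type` (e.g. `{"rope_type": "llama3", "factor": 8.0, "low_freq_factor": 1.0, "high_freq_factor": 4.0, "original_max_position_embeddings": 8192}`), while the check in `scripts/models.py` only accepts the older `{"type", "factor"}` shape. A minimal sketch of a possible workaround (a cut-down stand-in for `ModelArgs`, not the real class, which has many more fields) would be to normalize the new key before validating:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelArgs:
    # Cut-down sketch: the real ModelArgs in scripts/models.py has many more fields.
    rope_scaling: Optional[dict] = None

    def __post_init__(self):
        if self.rope_scaling:
            # Llama 3.1 configs ship {"rope_type": "llama3", ...};
            # older configs ship {"type": ..., "factor": ...}. Accept either key.
            if "rope_type" in self.rope_scaling and "type" not in self.rope_scaling:
                self.rope_scaling = dict(self.rope_scaling)
                self.rope_scaling["type"] = self.rope_scaling["rope_type"]
            required_keys = {"factor", "type"}
            if not all(key in self.rope_scaling for key in required_keys):
                raise ValueError(f"rope_scaling must contain keys {required_keys}")
```

This only gets past the validation check; whether the rest of the script actually applies the `llama3` RoPE scaling correctly is a separate question.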
