
Convert-hf-to-gguf fails with command-r-plus #6488

Closed as not planned

Description

@candre23

Trying to quantize the just-released Command R+ model. I know Command R support was added a while back, but there appears to be something different about this new, bigger model that is causing issues. With a fresh clone of llama.cpp from a few minutes ago, this is the failure I get when trying to convert.

Traceback (most recent call last):
  File "G:\lcpp2\convert-hf-to-gguf.py", line 2443, in <module>
    main()
  File "G:\lcpp2\convert-hf-to-gguf.py", line 2424, in main
    model_instance = model_class(dir_model, ftype_map[args.outtype], fname_out, args.bigendian)
  File "G:\lcpp2\convert-hf-to-gguf.py", line 2347, in __init__
    self.hparams["max_position_embeddings"] = self.hparams["model_max_length"]
KeyError: 'model_max_length'

https://huggingface.co/CohereForAI/c4ai-command-r-plus
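
For anyone hitting this before an upstream fix lands, here is a minimal sketch of a more defensive lookup. It is illustrative only: the function name resolve_max_position_embeddings and the 8192 fallback are my assumptions, not the converter's actual code.

    import json
    from pathlib import Path

    def resolve_max_position_embeddings(dir_model: Path, default: int = 8192) -> int:
        # Load the model's config.json, as convert-hf-to-gguf.py does.
        with open(dir_model / "config.json", encoding="utf-8") as f:
            hparams = json.load(f)
        # The Command R+ config appears to lack "model_max_length", so try
        # the keys in order and fall back rather than indexing directly.
        for key in ("model_max_length", "max_position_embeddings"):
            if key in hparams:
                return hparams[key]
        # Hypothetical default; the real fix should use whatever upstream decides.
        return default

A change along these lines would belong in the __init__ shown in the traceback, replacing the direct self.hparams["model_max_length"] index that raises the KeyError.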
