Closed as not planned
Description
Trying to quantize the just-released Command R+ model. I know Command R support was added a while back, but something about this new, bigger model appears to be different and is causing issues. With a fresh clone of LCPP from a few minutes ago, this is the failure I get when trying to convert:
```
Traceback (most recent call last):
  File "G:\lcpp2\convert-hf-to-gguf.py", line 2443, in <module>
    main()
  File "G:\lcpp2\convert-hf-to-gguf.py", line 2424, in main
    model_instance = model_class(dir_model, ftype_map[args.outtype], fname_out, args.bigendian)
  File "G:\lcpp2\convert-hf-to-gguf.py", line 2347, in __init__
    self.hparams["max_position_embeddings"] = self.hparams["model_max_length"]
KeyError: 'model_max_length'
```
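The traceback shows the converter unconditionally reading `model_max_length` from the model's hyperparameters, which raises `KeyError` when the Command R+ `config.json` does not define that key. One possible local workaround (a sketch of the general fix pattern, not the project's actual patch; the dict contents below are illustrative) is to fall back to the existing `max_position_embeddings` value when `model_max_length` is absent:

```python
# Illustrative hparams as loaded from a config.json that lacks
# "model_max_length" (values are made up for the example).
hparams = {"max_position_embeddings": 8192}

# The failing line in the traceback does an unconditional lookup:
#   hparams["max_position_embeddings"] = hparams["model_max_length"]
# which raises KeyError when the key is missing.

# A tolerant version prefers "model_max_length" if present and
# otherwise keeps the value already in the config:
hparams["max_position_embeddings"] = hparams.get(
    "model_max_length", hparams["max_position_embeddings"]
)

print(hparams["max_position_embeddings"])  # unchanged when the key is absent
```

This only papers over the missing key; whether the resulting context length is correct for Command R+ would still need to be verified against the model card.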