# Prerequisites

Please answer the following questions for yourself before submitting an issue.

- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md).
- [x] I [searched using keywords relevant to my issue](https://docs.github.com/en/issues/tracking-your-work-with-issues/filtering-and-searching-issues-and-pull-requests) to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the [Discussions](https://github.com/ggerganov/llama.cpp/discussions), and have a new bug or useful enhancement to share.

---

### Question/Conjecture

I am performing model conversions following the guidelines in this PR, using the fetched `llama-bpe` configs: https://github.com/ggerganov/llama.cpp/pull/6920#issue-2265280504

The recent [convert-hf-to-gguf-update.py](https://github.com/ggerganov/llama.cpp/blob/9f773486ab78d65f5cca3f7e31c862b7043bf721/convert-hf-to-gguf-update.py#L63) script fetches the `llama-bpe` configs, but these reflect the ones from the Base model. Within the last week, [there was **a change** to these settings](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/commit/a8977699a3d0820e80129fb3c93c20fbd9972c41) in the **meta-llama/Meta-Llama-3-8B-Instruct** repo.

Is this change to the Instruct model's EOS setting pertinent to the current conversion process?

To add: I haven't noticed any issues so far using either the Base model configs or the Instruct model configs.
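For context, here is a minimal sketch of how one might diff the EOS settings between the two configs before converting. The local paths are assumptions (the `generation_config.json` files would need to be downloaded from the respective Hugging Face repos first, e.g. with `huggingface-cli download`); this is just how I checked, not part of the conversion scripts themselves.

```python
import json
from pathlib import Path

# Hypothetical local paths -- assumes both generation_config.json files
# have already been fetched from the Base and Instruct repos.
BASE_CFG = Path("models/Meta-Llama-3-8B/generation_config.json")
INSTRUCT_CFG = Path("models/Meta-Llama-3-8B-Instruct/generation_config.json")

def eos_ids(path: Path) -> list:
    """Return the EOS token id(s) from a generation_config.json."""
    cfg = json.loads(path.read_text())
    eos = cfg.get("eos_token_id")
    # eos_token_id may be a single int or a list of ints
    return eos if isinstance(eos, list) else [eos]

base, instruct = eos_ids(BASE_CFG), eos_ids(INSTRUCT_CFG)
print(f"Base EOS ids:     {base}")
print(f"Instruct EOS ids: {instruct}")
if base != instruct:
    print("EOS settings differ between Base and Instruct configs.")
```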