I ran:

cargo run --release -- -m ./data/gpt4all-lora-quantized.bin -f examples/alpaca_prompt.txt --repl

and got:
[2023-03-29T07:21:13Z INFO llama_cli] Warning: Bad token in vocab at index 131
[2023-03-29T07:21:13Z INFO llama_cli] Warning: Bad token in vocab at index 132
[2023-03-29T07:21:13Z INFO llama_cli] Warning: Bad token in vocab at index 133
...
[2023-03-29T07:21:13Z INFO llama_cli] Warning: Bad token in vocab at index 256
[2023-03-29T07:21:13Z INFO llama_cli] Warning: Bad token in vocab at index 257
[2023-03-29T07:21:13Z INFO llama_cli] Warning: Bad token in vocab at index 258
[2023-03-29T07:21:13Z INFO llama_cli] ggml ctx size = 4017.35 MB
[2023-03-29T07:21:13Z INFO llama_cli] Loading model part 1/1 from './data/gpt4all-lora-quantized.bin'
thread 'main' panicked at 'index out of bounds: the len is 2 but the index is 2', /Users/katopz/git/katopz/llama-rs/llama-rs/src/lib.rs:773:21
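For context on the failure mode: in Rust, indexing a slice of length 2 with index 2 aborts with exactly this "index out of bounds" message. Below is a minimal sketch (not the actual llama-rs code at lib.rs:773; the `ModelPart` and `load_part` names are hypothetical) showing how the panic arises and how a guarded `get` turns it into a reportable error instead:

```rust
// Hypothetical stand-in for a loaded tensor part; not llama-rs's real type.
struct ModelPart {
    name: String,
}

// Look up a part by index. Direct indexing (`&parts[part_index]`) would panic
// when part_index == parts.len(), matching the message "the len is 2 but the
// index is 2". Using `get` returns an error the CLI could print instead.
fn load_part(parts: &[ModelPart], part_index: usize) -> Result<&ModelPart, String> {
    parts.get(part_index).ok_or_else(|| {
        format!(
            "model refers to part {} but only {} part(s) were loaded",
            part_index,
            parts.len()
        )
    })
}

fn main() {
    let parts = vec![
        ModelPart { name: "tok_embeddings.weight.0".into() },
        ModelPart { name: "tok_embeddings.weight.1".into() },
    ];

    // Index 2 into a slice of length 2 is out of bounds, so this reports an
    // error rather than panicking the whole process.
    match load_part(&parts, 2) {
        Ok(part) => println!("loaded {}", part.name),
        Err(e) => eprintln!("load error: {e}"),
    }
}
```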