I ran into an issue with loading the tokenizer, which was root-caused to my use of a local PyTorch build.
After building the aoti runner, I ran the following command:

```
cmake-out/aoti_run exportedModels/stories15M.so -z /home/angelayi/.torchchat/model-cache/stories15M/tokenizer.model -i "Once upon a time"
```
With my local build, the above command ran into the error:

```
couldn't load /home/angelayi/.torchchat/model-cache/stories15M/tokenizer.model
```

which comes from the sentencepiece tokenizer. Specifying `-l 2` doesn't change anything, as that is the default setting.
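(As a side note, one way to rule out a corrupted tokenizer file, independent of the runner, is to load it directly with the `sentencepiece` Python package. This is just a sanity-check sketch, not part of the original repro:)

```python
# Sanity check (hypothetical): load tokenizer.model directly with sentencepiece
# to confirm the file itself is a valid sentencepiece model.
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("/home/angelayi/.torchchat/model-cache/stories15M/tokenizer.model")
print(sp.GetPieceSize())              # vocab size, e.g. 32000 for the Llama 2 tokenizer
print(sp.EncodeAsIds("Once upon a time"))
```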
Changing to `-l 3` results in the following error:

```
terminate called after throwing an instance of 'std::invalid_argument'
  what():  invalid encoder line:
zsh: IOT instruction (core dumped)  cmake-out/aoti_run ../lucy_stories15M.so -z ../tokenizer.model -l 3 -i
```
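(The `invalid encoder line` message makes sense here: with `-l 3` the runner presumably expects a tiktoken-style vocab file, which is plain text with one base64-encoded token and its rank per line, so a binary sentencepiece protobuf fails on the first line. A minimal sketch of that parsing logic, hypothetical and in Python rather than the runner's C++:)

```python
# Minimal sketch of tiktoken-style vocab parsing (hypothetical, mirroring the
# kind of check that throws "invalid encoder line" in the C++ runner).
import base64

def load_tiktoken_vocab(path: str) -> dict[bytes, int]:
    vocab = {}
    with open(path, "rb") as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                # A sentencepiece .model file is a binary protobuf, so it
                # trips this check immediately.
                raise ValueError(f"invalid encoder line: {line!r}")
            vocab[base64.b64decode(parts[0])] = int(parts[1])
    return vocab
```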
After re-running `./install/install_requirements.sh`, which installs the 08142024 PyTorch nightly, the command runs successfully.
So I tried today's nightly (09112024) using `pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121`, and this also runs successfully.
Going back to my local PyTorch build, I checked out commit 26e5572, which corresponds to today's nightly cutoff, and built PyTorch locally. This runs into the initial tokenizer error.
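(To double-check which torch build is actually being picked up, comparing the imported version string and git commit against the nightly can help. A quick sketch, not from the original repro:)

```python
# Quick check (hypothetical): confirm which torch build is actually imported
# and which source commit it was built from.
import torch

print(torch.__version__)           # nightlies embed the date, e.g. 2.5.0.dev20240911+cu121
print(torch.version.git_version)   # source commit; should match 26e5572 for the local build above
```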
I still haven't figured out how to run with my local PyTorch build but, quoting Nikita, this is motivation to create a docker/venv story :P