
Commit 082e684

bump tokenicer to v0.0.3 (#1308)
* bump tokenicer to v0.0.3
* fix format.sh: running ruff via the quality extra would compile the project
1 parent 378f664 commit 082e684

File tree: 4 files changed, +7 -7 lines changed

.github/workflows/code_quality.yml

Lines changed: 4 additions & 3 deletions
@@ -9,16 +9,17 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version: [3.11]
-        os: [ubuntu-22.04]
+        python-version: [3.13]
+        os: [ubuntu-24.04]
     runs-on: ${{ matrix.os }}
 
     steps:
       - uses: actions/checkout@v4
         with:
           repository: ${{ github.event.pull_request.head.repo.full_name || github.repository }}
           ref: ${{ github.event.pull_request.head.ref || github.ref }}
-      - uses: actions/setup-python@v5
+
+      - uses: actions/setup-python@v5
         with:
           python-version: ${{ matrix.python-version }}
           cache: 'pip'

format/format.sh

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 cd "$(dirname "$0")" || exit
 
 # force ruff/isort to be same version as setup.py
-pip install -U gptqmodel["quality"]
+pip install -U ruff==0.9.6 isort==6.0.0
 
 ruff check ../gptqmodel/models ../gptqmodel/nn_modules ../gptqmodel/quantization ../gptqmodel/utils ../gptqmodel/__init__.py ../examples ../tests ../setup.py --fix --unsafe-fixes
 ruff_status=$?
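For reference, a minimal sketch of the updated local flow, assuming it is run from the repository root (the version pins simply mirror the ones in the diff above):

# Install only the pinned linters instead of the gptqmodel["quality"] extra,
# so nothing in the project itself gets compiled, then run the format script.
pip install -U ruff==0.9.6 isort==6.0.0
bash format/format.sh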

requirements.txt

Lines changed: 1 addition & 1 deletion
@@ -13,4 +13,4 @@ hf_transfer>=0.1.9
 huggingface_hub>=0.28.1
 lm-eval==0.4.7
 colorlog>=6.9.0
-tokenicer>=0.0.2
+tokenicer>=0.0.3
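A quick sketch for confirming an existing environment meets the bumped floor (assumes pip and Python 3.8+; the second command only reports the installed version):

# Upgrade tokenicer to a release satisfying the new lower bound, then print it.
pip install -U "tokenicer>=0.0.3"
python -c "from importlib.metadata import version; print(version('tokenicer'))"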

tests/test_transformers.py

Lines changed: 1 addition & 2 deletions
@@ -15,15 +15,14 @@
 # limitations under the License.
 import os
 
-
 os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
 import tempfile  # noqa: E402
 import unittest  # noqa: E402
 
 import transformers  # noqa: E402
+from gptqmodel.utils.torch import torch_empty_cache  # noqa: E402
 from packaging.version import Version  # noqa: E402
 from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig  # noqa: E402
-from gptqmodel.utils.torch import torch_empty_cache  # noqa: E402
 
 
 class TestTransformersIntegration(unittest.TestCase):
