README.md (1 addition, 1 deletion)
@@ -7,7 +7,7 @@ GPTQ is SOTA one-shot weight quantization method
## New Features
**Changed to use only pytorch instead of the current cuda kernel.
- It has no impact on memory usage. There is a slowdown below 128 length(If you use Transformers' use_cache, seq_len is effectively close to 1.), but much faster at 128 and above.**
+ It has no impact on memory usage. There is a slowdown below a sequence length of 128 (if you use Transformers' use_cache, the effective length is close to 1), but it is much faster at 128 and above.**
Changed to support new features proposed by [GPTQ](https://github.com/IST-DASLab/gptq#new-features).
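To illustrate why a pure-PyTorch path trades off with sequence length, here is a minimal sketch of a dequantize-then-matmul replacing a fused CUDA kernel. The function name `dequant_matmul`, the int32 packing layout, and the per-output-channel `scales`/`zeros` are assumptions for illustration, not the repository's actual code:

```python
import torch

def dequant_matmul(x, qweight, scales, zeros, bits=4):
    # x:       (seq_len, in_features) float activations
    # qweight: (in_features // (32 // bits), out_features) int32 packed weights
    # scales:  (out_features,) per-output-channel scale
    # zeros:   (out_features,) per-output-channel zero point (float)
    shifts = torch.arange(0, 32, bits, dtype=torch.int32, device=qweight.device)
    # Unpack each int32 word into (32 // bits) small integers.
    w = (qweight.unsqueeze(1) >> shifts.view(1, -1, 1)) & ((1 << bits) - 1)
    w = w.reshape(-1, qweight.shape[1]).to(x.dtype)   # (in_features, out_features)
    w = (w - zeros) * scales                          # dequantize to float
    # One dense matmul: the unpack/dequantize cost is amortized over seq_len rows,
    # so this is fast for long sequences but relatively slow when seq_len is ~1
    # (the decoding case with use_cache).
    return x @ w
```

The dequantization cost is fixed per forward pass, which is consistent with the note above: at sequence lengths of 128 and up the dense matmul dominates, while at effective length ~1 the unpacking overhead does.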