
Commit 83c002d

Revert "fix opt fc1/fc2 layer modules should not be quantized (#118)" (#149)
This reverts commit c9a0688, restoring fc1 and fc2 to the set of OPT layer modules that get quantized.

Parent: cd80805

1 file changed (+2, -2)

gptqmodel/models/opt.py

@@ -15,6 +15,6 @@ class OPTGPTQ(BaseGPTQModel):
     layer_modules = [
         ["self_attn.k_proj", "self_attn.v_proj", "self_attn.q_proj"],
         ["self_attn.out_proj"],
-        # ["fc1"], disabled: not a good candidate for quantization
-        # ["fc2"], disabled: not a good candidate for quantization
+        ["fc1"],
+        ["fc2"],
     ]
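
For context, this is how the touched region of gptqmodel/models/opt.py reads after the revert. The list contents come straight from the diff and the class line from the hunk header; the import path and the surrounding class skeleton are assumptions, not the full file:

# Minimal sketch of gptqmodel/models/opt.py after this revert.
# layer_modules is copied from the diff; the import path is an
# assumption, and the real file may define additional attributes.
from ..base import BaseGPTQModel  # assumed relative import


class OPTGPTQ(BaseGPTQModel):
    # Groups of submodules inside each OPT decoder layer that GPTQ
    # quantizes, processed group by group in this order.
    layer_modules = [
        ["self_attn.k_proj", "self_attn.v_proj", "self_attn.q_proj"],
        ["self_attn.out_proj"],
        ["fc1"],  # feed-forward up-projection, quantized again after the revert
        ["fc2"],  # feed-forward down-projection, quantized again after the revert
    ]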
