
[X86] Fold some (truncate (srl (add X, C1), C2)) patterns to (add (truncate (srl X, C2)), C1') #126448


Merged: 9 commits into llvm:main, Feb 21, 2025

Conversation

@joaotgouveia (Contributor) commented Feb 10, 2025

Addresses the poor codegen identified in #123239 and a few extra cases. This transformation is correct for eq (https://alive2.llvm.org/ce/z/qZhwtT), ne (https://alive2.llvm.org/ce/z/6gsmNz), ult (https://alive2.llvm.org/ce/z/xip_td) and ugt (https://alive2.llvm.org/ce/z/39XQkX).

Fixes #123239

[X86] Fold some (truncate (srl (add X, C1), C2)) patterns to (add (truncate (srl X, C2), C1'))

C1' will be smaller than C1 so we are able to avoid generating code with MOVABS and large constants in certain cases.
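As a concrete illustration of how C1' shrinks (a sketch using the constants from the tests below; the APInt expression mirrors the NewAddConstVal computation in the patch, but this snippet is illustrative and not part of it):

```cpp
#include "llvm/ADT/APInt.h"
#include <cassert>

// C1 = 0x000E000000000000 needs a 64-bit MOVABS; after shifting by C2 = 48,
// the rewritten constant C1' = ~((~C1) >> C2), truncated to 32 bits, fits in
// a 32-bit immediate (0xFFFF000E, i.e. -65522 in the generated addl).
void shrinkAddConstant() {
  llvm::APInt C1(64, 0x000E000000000000ULL);
  llvm::APInt C1Prime = (~((~C1).lshr(48))).trunc(32);
  assert(C1Prime == llvm::APInt(32, 0xFFFF000EULL));
}
```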

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository; in that case, you can instead tag reviewers by name in a comment by using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by pinging it with a comment saying “Ping”. The common courtesy ping rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

@llvmbot (Member) commented Feb 10, 2025

@llvm/pr-subscribers-backend-x86

Author: João Gouveia (joaotgouveia)


Full diff: https://github.com/llvm/llvm-project/pull/126448.diff

2 Files Affected:

  • (modified) llvm/lib/Target/X86/X86ISelLowering.cpp (+61)
  • (added) llvm/test/CodeGen/X86/combine-setcc-trunc-add.ll (+123)
diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp
index 9a916a663a64c2..fab1482b8675c0 100644
--- a/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -48472,6 +48472,64 @@ static SDValue combineSetCCMOVMSK(SDValue EFLAGS, X86::CondCode &CC,
   return SDValue();
 }
 
+// Attempt to fold some (truncate (srl (add X, C1), C2)) patterns to
+// (add (truncate (srl X, C2)), C1'). C1' will be smaller than C1 so we are
+// able to avoid generating code with MOVABS and large constants in certain cases.
+static SDValue combineSetCCTruncAdd(SDValue EFLAGS, X86::CondCode &CC,
+                                    SelectionDAG &DAG) {
+  if (!(CC == X86::COND_E || CC == X86::COND_NE || CC == X86::COND_AE ||
+        CC == X86::COND_B))
+    return SDValue();
+
+  EVT VT = EFLAGS.getValueType();
+  if (EFLAGS.getOpcode() == X86ISD::SUB && VT == MVT::i32) {
+    SDValue CmpLHS = EFLAGS.getOperand(0);
+    auto *CmpConstant = dyn_cast<ConstantSDNode>(EFLAGS.getOperand(1));
+
+    if (CmpLHS.getOpcode() != ISD::TRUNCATE || !CmpConstant)
+      return SDValue();
+
+    SDValue Srl = CmpLHS.getOperand(0);
+    EVT SrlVT = Srl.getValueType();
+    if (Srl.getOpcode() != ISD::SRL || SrlVT != MVT::i64)
+      return SDValue();
+
+    SDValue Add = Srl.getOperand(0);
+    // Avoid changing the ADD if it is used elsewhere.
+    if (Add.getOpcode() != ISD::ADD || !Add.hasOneUse())
+      return SDValue();
+
+    auto *AddConstant = dyn_cast<ConstantSDNode>(Add.getOperand(1));
+    auto *SrlConstant = dyn_cast<ConstantSDNode>(Srl.getOperand(1));
+    if (!AddConstant || !SrlConstant)
+      return SDValue();
+
+    APInt AddConstVal = AddConstant->getAPIntValue();
+    APInt SrlConstVal = SrlConstant->getAPIntValue();
+    if (!SrlConstVal.ugt(VT.getSizeInBits()))
+      return SDValue();
+
+    APInt CmpConstVal = CmpConstant->getAPIntValue();
+    APInt ShiftedAddConst = AddConstVal.lshr(SrlConstVal);
+    if (!CmpConstVal.ult(ShiftedAddConst.trunc(VT.getSizeInBits())) ||
+        (ShiftedAddConst.shl(SrlConstVal)) != AddConstVal)
+      return SDValue();
+
+    SDLoc DL(EFLAGS);
+    SDValue AddLHSSrl =
+        DAG.getNode(ISD::SRL, DL, SrlVT, Add.getOperand(0), Srl.getOperand(1));
+    SDValue Trunc = DAG.getNode(ISD::TRUNCATE, DL, VT, AddLHSSrl);
+
+    APInt NewAddConstVal =
+        (~((~AddConstVal).lshr(SrlConstVal))).trunc(VT.getSizeInBits());
+    SDValue NewAddConst = DAG.getConstant(NewAddConstVal, DL, VT);
+    SDValue NewAddNode = DAG.getNode(ISD::ADD, DL, VT, Trunc, NewAddConst);
+    return DAG.getNode(X86ISD::CMP, DL, VT, NewAddNode, EFLAGS.getOperand(1));
+  }
+
+  return SDValue();
+}
+
 /// Optimize an EFLAGS definition used according to the condition code \p CC
 /// into a simpler EFLAGS value, potentially returning a new \p CC and replacing
 /// uses of chain values.
@@ -48494,6 +48552,9 @@ static SDValue combineSetCCEFLAGS(SDValue EFLAGS, X86::CondCode &CC,
   if (SDValue R = combineSetCCMOVMSK(EFLAGS, CC, DAG, Subtarget))
     return R;
 
+  if (SDValue R = combineSetCCTruncAdd(EFLAGS, CC, DAG))
+    return R;
+
   return combineSetCCAtomicArith(EFLAGS, CC, DAG, Subtarget);
 }
 
diff --git a/llvm/test/CodeGen/X86/combine-setcc-trunc-add.ll b/llvm/test/CodeGen/X86/combine-setcc-trunc-add.ll
new file mode 100644
index 00000000000000..b84b256e7fa592
--- /dev/null
+++ b/llvm/test/CodeGen/X86/combine-setcc-trunc-add.ll
@@ -0,0 +1,123 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc < %s -mtriple=x86_64-unknown | FileCheck %s --check-prefixes=X64
+
+; Test for https://github.com/llvm/llvm-project/issues/123239
+
+define i1 @test_ult_trunc_add(i64 %x) {
+; X64-LABEL: test_ult_trunc_add:
+; X64:       # %bb.0: # %entry
+; X64-NEXT:    shrq $48, %rdi
+; X64-NEXT:    addl $-65522, %edi # imm = 0xFFFF000E
+; X64-NEXT:    cmpl $3, %edi
+; X64-NEXT:    setb %al
+; X64-NEXT:    retq
+entry:
+  %add = add i64 %x, 3940649673949184
+  %shr = lshr i64 %add, 48
+  %conv = trunc i64 %shr to i32
+  %res = icmp ult i32 %conv, 3
+  ret i1 %res
+}
+
+define i1 @test_ult_add(i64 %x) {
+; X64-LABEL: test_ult_add:
+; X64:       # %bb.0: # %entry
+; X64-NEXT:    shrq $48, %rdi
+; X64-NEXT:    addl $-65522, %edi # imm = 0xFFFF000E
+; X64-NEXT:    cmpl $3, %edi
+; X64-NEXT:    setb %al
+; X64-NEXT:    retq
+entry:
+    %0 = add i64 3940649673949184, %x
+    %1 = icmp ult i64 %0, 844424930131968
+    ret i1 %1
+}
+
+define i1 @test_ugt_trunc_add(i64 %x) {
+; X64-LABEL: test_ugt_trunc_add:
+; X64:       # %bb.0: # %entry
+; X64-NEXT:    shrq $48, %rdi
+; X64-NEXT:    addl $-65522, %edi # imm = 0xFFFF000E
+; X64-NEXT:    cmpl $4, %edi
+; X64-NEXT:    setae %al
+; X64-NEXT:    retq
+entry:
+  %add = add i64 %x, 3940649673949184
+  %shr = lshr i64 %add, 48
+  %conv = trunc i64 %shr to i32
+  %res = icmp ugt i32 %conv, 3
+  ret i1 %res
+}
+
+define i1 @test_ugt_add(i64 %x) {
+; X64-LABEL: test_ugt_add:
+; X64:       # %bb.0: # %entry
+; X64-NEXT:    movabsq $3940649673949184, %rax # imm = 0xE000000000000
+; X64-NEXT:    addq %rdi, %rax
+; X64-NEXT:    movabsq $844424930131968, %rcx # imm = 0x3000000000000
+; X64-NEXT:    cmpq %rcx, %rax
+; X64-NEXT:    seta %al
+; X64-NEXT:    retq
+entry:
+    %0 = add i64 3940649673949184, %x
+    %1 = icmp ugt i64 %0, 844424930131968
+    ret i1 %1
+}
+
+define i1 @test_eq_trunc_add(i64 %x) {
+; X64-LABEL: test_eq_trunc_add:
+; X64:       # %bb.0: # %entry
+; X64-NEXT:    shrq $48, %rdi
+; X64-NEXT:    addl $-65522, %edi # imm = 0xFFFF000E
+; X64-NEXT:    cmpl $3, %edi
+; X64-NEXT:    sete %al
+; X64-NEXT:    retq
+entry:
+  %add = add i64 %x, 3940649673949184
+  %shr = lshr i64 %add, 48
+  %conv = trunc i64 %shr to i32
+  %res = icmp eq i32 %conv, 3
+  ret i1 %res
+}
+
+define i1 @test_eq_add(i64 %x) {
+; X64-LABEL: test_eq_add:
+; X64:       # %bb.0: # %entry
+; X64-NEXT:    movabsq $-3096224743817216, %rax # imm = 0xFFF5000000000000
+; X64-NEXT:    cmpq %rax, %rdi
+; X64-NEXT:    sete %al
+; X64-NEXT:    retq
+entry:
+    %0 = add i64 3940649673949184, %x
+    %1 = icmp eq i64 %0, 844424930131968
+    ret i1 %1
+}
+
+define i1 @test_ne_trunc_add(i64 %x) {
+; X64-LABEL: test_ne_trunc_add:
+; X64:       # %bb.0: # %entry
+; X64-NEXT:    shrq $48, %rdi
+; X64-NEXT:    addl $-65522, %edi # imm = 0xFFFF000E
+; X64-NEXT:    cmpl $3, %edi
+; X64-NEXT:    setne %al
+; X64-NEXT:    retq
+entry:
+  %add = add i64 %x, 3940649673949184
+  %shr = lshr i64 %add, 48
+  %conv = trunc i64 %shr to i32
+  %res = icmp ne i32 %conv, 3
+  ret i1 %res
+}
+
+define i1 @test_ne_add(i64 %x) {
+; X64-LABEL: test_ne_add:
+; X64:       # %bb.0: # %entry
+; X64-NEXT:    movabsq $-3096224743817216, %rax # imm = 0xFFF5000000000000
+; X64-NEXT:    cmpq %rax, %rdi
+; X64-NEXT:    setne %al
+; X64-NEXT:    retq
+entry:
+    %0 = add i64 3940649673949184, %x
+    %1 = icmp ne i64 %0, 844424930131968
+    ret i1 %1
+}

@joaotgouveia joaotgouveia changed the title [X86] Fold some (truncate (srl (add X, C1), C2)) patterns to (add (truncate (srl X, C2), C1')) [X86] Fold some (setcc (sub (truncate (srl (add X, C1), C2)), C3), CC) patterns to (setcc (cmp (add (truncate (srl X, C2)), C1'), C3), CC) Feb 10, 2025
@joaotgouveia (Contributor, Author):

Thank you for your quick feedback. I have made the suggested changes.

Comment on lines 15 to 19
%add = add i64 %x, 3940649673949184
%shr = lshr i64 %add, 48
%conv = trunc i64 %shr to i32
%res = icmp ult i32 %conv, 3
ret i1 %res
Contributor:

I'm wondering if we can make it more general, e.g.:
https://alive2.llvm.org/ce/z/zs6NQc
https://godbolt.org/z/jTT99GrGK

I feel the cmp is not necessary here.

Contributor Author:

Looks like something like this works: https://alive2.llvm.org/ce/z/A8XKwW
I initially missed the trunc to i16 + zext.

return SDValue();

// Avoid changing the ADD if it is used elsewhere.
if (!Srl.getOperand(0).hasOneUse())
Collaborator:

This can be replaced by a m_OneUse in the first pattern match
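Something along these lines, assuming the SDPatternMatch helpers (a hypothetical sketch of the suggestion, not the code that was committed):

```cpp
// Fold the one-use requirement on the ADD into the match itself rather
// than checking hasOneUse() separately.
using namespace llvm::SDPatternMatch;
SDValue AddLhs;
APInt AddConst, SrlConst;
if (!sd_match(CmpLHS, m_Trunc(m_Srl(m_OneUse(m_Add(m_Value(AddLhs),
                                                   m_ConstInt(AddConst))),
                                    m_ConstInt(SrlConst)))))
  return SDValue();
```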


SDValue Srl;
if (!sd_match(EFLAGS.getOperand(0).getOperand(0),
m_AllOf(m_SpecificVT(MVT::i64), m_Value(Srl))))
Collaborator:

Merge this into the pattern match above?


EVT VT = EFLAGS.getValueType();
APInt ShiftedAddConst = AddConst.lshr(SrlConst);
if (!CmpConst.ult(ShiftedAddConst.trunc(VT.getSizeInBits())) ||
Collaborator:

use AddConst.extractBits() ?
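For reference, extractBits reads a bit-field in one call; under the names used in the snippet above, the two forms below should be equivalent (illustrative sketch):

```cpp
// Take VT.getSizeInBits() bits of AddConst starting at bit SrlConst:
llvm::APInt Shifted =
    AddConst.lshr(SrlConst).trunc(VT.getSizeInBits());       // shift + trunc
llvm::APInt Extracted =
    AddConst.extractBits(VT.getSizeInBits(),
                         SrlConst.getZExtValue());           // one call
// Shifted == Extracted for in-range shift amounts.
```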

@joaotgouveia (Contributor, Author):

I've changed the fold to make it more general: https://alive2.llvm.org/ce/z/F3oLAg
It looks like a potentially unnecessary MOVZ is being generated (b773400#diff-087ad9d10d1a1ef9414f8bf9111fef0b4983cebe3a1f17a20accf492790b40c0L41-R44, for example). Considering this is also noticeable here https://godbolt.org/z/jTT99GrGK, I'm guessing this is a separate issue?
Regarding the TODO comment on i64 math (b773400#diff-eb2f176d67cdf1955a90e71e25d6d39910d723d4e0b8a9bf8dfa229d3a6b2c1eR53685-R53686), should I remove it entirely?

@joaotgouveia joaotgouveia changed the title [X86] Fold some (setcc (sub (truncate (srl (add X, C1), C2)), C3), CC) patterns to (setcc (cmp (add (truncate (srl X, C2)), C1'), C3), CC) [X86] Fold some (truncate (srl (add X, C1), C2)) patterns to (add (truncate (srl X, C2)), C1') Feb 12, 2025
Comment on lines 53631 to 53633
APInt CleanupSizeConstVal = (SrlConst - 32).zextOrTrunc(VT.getSizeInBits());
SDValue CleanupSizeConst = DAG.getConstant(CleanupSizeConstVal, DL, VT);
SDValue Shl = DAG.getNode(ISD::SHL, DL, VT, NewAddNode, CleanupSizeConst);
Contributor:

Can it help if we replace the SHL/SRL with getAnyExtOrTrunc? I'm not sure anyext always generates the correct result, but it should help remove the movzwl.

Contributor Author:

getAnyExtOrTrunc does indeed remove the movzwl, but I'm not sure about its correctness. I might be mistaken, but judging by the proofs it looks like we need the higher bits to be zeroed. Both proofs explicitly zero the higher bits of the result, and this seems to be required for the transformation to be correct (https://alive2.llvm.org/ce/z/zs6NQc using trunc + zext, https://alive2.llvm.org/ce/z/F3oLAg using shl + lshr).

Contributor:

I think they are two sides of the same coin. If you believe the MOVZ is unnecessary, then it means we can assume the high 16 bits are all zeros. But I don't know how to prove it. I used zext just because there's no anyext in the IR.

Contributor Author:

I'm inclined to think that the MOVZ is unnecessary. This transformation with the AND does not use it https://godbolt.org/z/a5d5vhdsb, and my previous implementation of the fold (although less general) also did not generate any MOVZs. I'll change the implementation to use anyext.
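A minimal sketch of that change, assuming the SHL/SRL cleanup pair is what gets replaced (NewAddNode, DL and VT follow the snippet quoted above; NarrowVT is a hypothetical name for the narrow type, e.g. MVT::i16 for a 48-bit shift; getAnyExtOrTrunc is the existing SelectionDAG helper):

```cpp
// The shl/lshr pair zero-extends the low bits back into VT; the anyext
// variant keeps the truncate but leaves the high bits unspecified:
SDValue Narrow = DAG.getNode(ISD::TRUNCATE, DL, NarrowVT, NewAddNode);
SDValue Res = DAG.getAnyExtOrTrunc(Narrow, DL, VT);
```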

ret i1 %res
}

define i1 @test_ugt_add(i64 %x) {
Contributor:

Are these negative tests? Add comments if they are.

Contributor Author:

No, this transformation is also applicable to those tests. However, the selection DAG differs in those cases, so I figured expanding the transformation to include them should be a separate patch. If preferred, I can expand it now, though.

Contributor:

It's ok to do in a separate patch.

@phoebewang (Contributor):

> Regarding the TODO comment on i64 math (b773400#diff-eb2f176d67cdf1955a90e71e25d6d39910d723d4e0b8a9bf8dfa229d3a6b2c1eR53685-R53686), should I remove it entirely?

This reminds me: can we expand it to AND/XOR/OR/SUB, maybe in a follow-up patch? Then I think we can remove it. For i64, it seems only the constant fold is beneficial.

; X64-NEXT: cmpl $3, %eax
; X64-NEXT: setb %al
; X64-NEXT: retq
entry:
Collaborator:

(style) remove `entry:` and avoid numbered variables

@joaotgouveia (Contributor, Author):

> This reminds me: can we expand it to AND/XOR/OR/SUB, maybe in a follow-up patch? Then I think we can remove it. For i64, it seems only the constant fold is beneficial.

This transformation is correct for those cases as well. Expanding it to include those ops would require adding a couple of tests and expanding the sd_match to cover them, so it should be pretty simple. I can do it in this patch or in a follow up, depending on how wide you want the scope of this patch to be.

It looks like AND is already being transformed (https://godbolt.org/z/a5d5vhdsb), so for that specific operation maybe just adding a test would suffice?

@phoebewang (Contributor):

> This transformation is correct for those cases as well. Expanding it to include those ops would require adding a couple of tests and expanding the sd_match to cover them, so it should be pretty simple. I can do it in this patch or in a follow up, depending on how wide you want the scope of this patch to be.

Thanks! The general preference is to use multiple patches and keep each simple.

> It looks like AND is already being transformed (https://godbolt.org/z/a5d5vhdsb), so for that specific operation maybe just adding a test would suffice?

It seems that's because the constant happens to be a power of 2: https://godbolt.org/z/1za939PKc


github-actions bot commented Feb 14, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

; X64-NEXT: cmpl $65525, %edi # imm = 0xFFF5
; X64-NEXT: sete %al
; X64-NEXT: retq
entry:
Collaborator:

remove "entry " from all tests

Contributor Author:

Done. Sorry for missing it again after your first review.

@phoebewang (Contributor) left a comment:

LGTM.

@RKSimon (Collaborator) left a comment:

LGTM

@phoebewang phoebewang merged commit 0a913b5 into llvm:main Feb 21, 2025
8 checks passed

@joaotgouveia Congratulations on having your first Pull Request (PR) merged into the LLVM Project!

Your changes will be combined with recent changes from other authors, then tested by our build bots. If there is a problem with a build, you may receive a report in an email or a comment on this PR.

Please check whether problems have been caused by your change specifically, as the builds can include changes from many authors. It is not uncommon for your change to be included in a build that fails due to someone else's changes, or infrastructure issues.

How to do this, and the rest of the post-merge process, is covered in detail here.

If your change does cause a problem, it may be reverted, or you can revert it yourself. This is a normal part of LLVM development. You can fix your changes and open a new PR to merge them again.

If you don't get any reports, no action is required from you. Your changes are working as expected, well done!

phoebewang pushed a commit that referenced this pull request Mar 1, 2025
… `xor` (#128435)

As discussed in #126448, the fold implemented by #126448 / #128353 can
be extended to operations other than `add`. This patch extends the fold
performed by `combinei64TruncSrlAdd` to include `or` and `xor` (proof:
https://alive2.llvm.org/ce/z/AXuaQu). There's no need to extend it to
`sub` and `and`, as similar folds are already being performed for those
operations.

CC: @phoebewang @RKSimon
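For example, with the constant from the earlier tests, the `or` case folds the same way (an illustrative sketch, not code from that patch; for or/xor the constant commutes with the shift bit-by-bit, so no carry reasoning is needed):

```cpp
#include "llvm/ADT/APInt.h"
#include <cassert>

// (trunc ((X | C1) >> 48)) == ((trunc (X >> 48)) | (C1 >> 48)):
// the new constant is just C1 >> 48 = 0xE, which fits any immediate.
void shrinkOrConstant() {
  llvm::APInt C1(64, 0x000E000000000000ULL);
  llvm::APInt C1Prime = C1.lshr(48).trunc(32);
  assert(C1Prime == llvm::APInt(32, 0xE));
}
```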
jph-13 pushed a commit to jph-13/llvm-project that referenced this pull request Mar 21, 2025
… `xor` (llvm#128435)

Successfully merging this pull request may close these issues:

Unnecessarily large constant created from reordering add and shift