
[DAG] Always allow folding XOR patterns to ABS pre-legalization #94601


Merged: 1 commit into llvm:main on Jun 7, 2024

Conversation

@RKSimon (Collaborator) commented Jun 6, 2024

I have the feeling that AMDGPU isn't flagging some legal ABS instructions where it could, but overall this appears to be a win.

Removes residual ARM handling for vXi64 ABS nodes to prevent infinite loops.
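
For reference, the two idioms being folded, as a hypothetical IR sketch mirroring the matchers in the DAGCombiner.cpp hunks below (visitSUB handles the first form, visitXOR the second; function names are illustrative only):

; sub(xor(A, B), B) with B = ashr(A, bitwidth-1)  -->  abs(A)
define i32 @abs_sub_form(i32 %a) {
  %b = ashr i32 %a, 31
  %x = xor i32 %a, %b
  %r = sub i32 %x, %b
  ret i32 %r
}

; xor(add(X, Y), Y) with Y = ashr(X, bitwidth-1)  -->  abs(X)
define i32 @abs_xor_form(i32 %x) {
  %y = ashr i32 %x, 31
  %a = add i32 %x, %y
  %r = xor i32 %a, %y
  ret i32 %r
}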

@llvmbot (Member) commented Jun 6, 2024

@llvm/pr-subscribers-backend-arm
@llvm/pr-subscribers-backend-amdgpu
@llvm/pr-subscribers-llvm-selectiondag

@llvm/pr-subscribers-backend-x86

Author: Simon Pilgrim (RKSimon)

Changes

I have the feeling that AMDGPU isn't flagging some legal ABS instructions where it could, but overall this appears to be a win.

Required for #94504 + #92576


Patch is 392.75 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/94601.diff

15 Files Affected:

  • (modified) llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll (+24-24)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-idiv.ll (+435-517)
  • (modified) llvm/test/CodeGen/AMDGPU/bypass-div.ll (+36-38)
  • (modified) llvm/test/CodeGen/AMDGPU/div_i128.ll (+412-406)
  • (modified) llvm/test/CodeGen/AMDGPU/div_v2i128.ll (+499-495)
  • (modified) llvm/test/CodeGen/AMDGPU/idiv-licm.ll (+44-56)
  • (modified) llvm/test/CodeGen/AMDGPU/itofp.i128.bf.ll (+10-10)
  • (modified) llvm/test/CodeGen/AMDGPU/itofp.i128.ll (+30-30)
  • (modified) llvm/test/CodeGen/AMDGPU/rem_i128.ll (+454-444)
  • (modified) llvm/test/CodeGen/AMDGPU/sdiv.ll (+513-553)
  • (modified) llvm/test/CodeGen/AMDGPU/sdiv64.ll (+137-157)
  • (modified) llvm/test/CodeGen/AMDGPU/srem64.ll (+116-134)
  • (modified) llvm/test/CodeGen/X86/div-rem-pair-recomposition-signed.ll (+149-149)
  • (modified) llvm/test/CodeGen/X86/pr38539.ll (+1-1)
diff --git a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index 02cd125eeff0..a913472525cf 100644
--- a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -4042,7 +4042,7 @@ SDValue DAGCombiner::visitSUB(SDNode *N) {
   }
 
   // fold B = sra (A, size(A)-1); sub (xor (A, B), B) -> (abs A)
-  if (hasOperation(ISD::ABS, VT) &&
+  if ((!LegalOperations || hasOperation(ISD::ABS, VT)) &&
       sd_match(N1, m_Sra(m_Value(A), m_SpecificInt(BitWidth - 1))) &&
       sd_match(N0, m_Xor(m_Specific(A), m_Specific(N1))))
     return DAG.getNode(ISD::ABS, DL, VT, A);
@@ -9526,7 +9526,7 @@ SDValue DAGCombiner::visitXOR(SDNode *N) {
   }
 
   // fold Y = sra (X, size(X)-1); xor (add (X, Y), Y) -> (abs X)
-  if (TLI.isOperationLegalOrCustom(ISD::ABS, VT)) {
+  if (!LegalOperations || hasOperation(ISD::ABS, VT)) {
     SDValue A = N0Opcode == ISD::ADD ? N0 : N1;
     SDValue S = N0Opcode == ISD::SRA ? N0 : N1;
     if (A.getOpcode() == ISD::ADD && S.getOpcode() == ISD::SRA) {
diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll
index 5c40a4ce13e3..cb59121d6970 100644
--- a/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll
+++ b/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll
@@ -135,31 +135,31 @@ define i32 @select_sdiv_lhs_opaque_const0_i32(i1 %cond) {
 ; GCN-NEXT:    s_waitcnt lgkmcnt(0)
 ; GCN-NEXT:    v_mov_b32_e32 v1, s4
 ; GCN-NEXT:    v_cndmask_b32_e32 v0, 5, v1, vcc
-; GCN-NEXT:    v_ashrrev_i32_e32 v1, 31, v0
-; GCN-NEXT:    v_add_u32_e32 v0, vcc, v0, v1
-; GCN-NEXT:    v_xor_b32_e32 v0, v0, v1
-; GCN-NEXT:    v_cvt_f32_u32_e32 v2, v0
-; GCN-NEXT:    v_sub_u32_e32 v3, vcc, 0, v0
+; GCN-NEXT:    v_sub_u32_e32 v1, vcc, 0, v0
+; GCN-NEXT:    v_max_i32_e32 v1, v0, v1
+; GCN-NEXT:    v_cvt_f32_u32_e32 v2, v1
+; GCN-NEXT:    v_sub_u32_e32 v3, vcc, 0, v1
 ; GCN-NEXT:    s_mov_b32 s4, 0xf4240
 ; GCN-NEXT:    v_rcp_iflag_f32_e32 v2, v2
+; GCN-NEXT:    v_ashrrev_i32_e32 v0, 31, v0
 ; GCN-NEXT:    v_mul_f32_e32 v2, 0x4f7ffffe, v2
 ; GCN-NEXT:    v_cvt_u32_f32_e32 v2, v2
 ; GCN-NEXT:    v_mul_lo_u32 v3, v3, v2
 ; GCN-NEXT:    v_mul_hi_u32 v3, v2, v3
 ; GCN-NEXT:    v_add_u32_e32 v2, vcc, v2, v3
 ; GCN-NEXT:    v_mul_hi_u32 v2, v2, s4
-; GCN-NEXT:    v_mul_lo_u32 v3, v2, v0
+; GCN-NEXT:    v_mul_lo_u32 v3, v2, v1
 ; GCN-NEXT:    v_add_u32_e32 v4, vcc, 1, v2
 ; GCN-NEXT:    v_sub_u32_e32 v3, vcc, 0xf4240, v3
-; GCN-NEXT:    v_cmp_ge_u32_e32 vcc, v3, v0
+; GCN-NEXT:    v_cmp_ge_u32_e32 vcc, v3, v1
 ; GCN-NEXT:    v_cndmask_b32_e32 v2, v2, v4, vcc
-; GCN-NEXT:    v_sub_u32_e64 v4, s[4:5], v3, v0
+; GCN-NEXT:    v_sub_u32_e64 v4, s[4:5], v3, v1
 ; GCN-NEXT:    v_cndmask_b32_e32 v3, v3, v4, vcc
 ; GCN-NEXT:    v_add_u32_e32 v4, vcc, 1, v2
-; GCN-NEXT:    v_cmp_ge_u32_e32 vcc, v3, v0
-; GCN-NEXT:    v_cndmask_b32_e32 v0, v2, v4, vcc
-; GCN-NEXT:    v_xor_b32_e32 v0, v0, v1
-; GCN-NEXT:    v_sub_u32_e32 v0, vcc, v0, v1
+; GCN-NEXT:    v_cmp_ge_u32_e32 vcc, v3, v1
+; GCN-NEXT:    v_cndmask_b32_e32 v1, v2, v4, vcc
+; GCN-NEXT:    v_xor_b32_e32 v1, v1, v0
+; GCN-NEXT:    v_sub_u32_e32 v0, vcc, v1, v0
 ; GCN-NEXT:    s_setpc_b64 s[30:31]
   %select = select i1 %cond, i32 ptrtoint (ptr addrspace(1) @gv to i32), i32 5
   %op = sdiv i32 1000000, %select
@@ -217,31 +217,31 @@ define i32 @select_sdiv_lhs_opaque_const1_i32(i1 %cond) {
 ; GCN-NEXT:    s_waitcnt lgkmcnt(0)
 ; GCN-NEXT:    v_mov_b32_e32 v1, s4
 ; GCN-NEXT:    v_cndmask_b32_e64 v0, v1, 5, vcc
-; GCN-NEXT:    v_ashrrev_i32_e32 v1, 31, v0
-; GCN-NEXT:    v_add_u32_e32 v0, vcc, v0, v1
-; GCN-NEXT:    v_xor_b32_e32 v0, v0, v1
-; GCN-NEXT:    v_cvt_f32_u32_e32 v2, v0
-; GCN-NEXT:    v_sub_u32_e32 v3, vcc, 0, v0
+; GCN-NEXT:    v_sub_u32_e32 v1, vcc, 0, v0
+; GCN-NEXT:    v_max_i32_e32 v1, v0, v1
+; GCN-NEXT:    v_cvt_f32_u32_e32 v2, v1
+; GCN-NEXT:    v_sub_u32_e32 v3, vcc, 0, v1
 ; GCN-NEXT:    s_mov_b32 s4, 0xf4240
 ; GCN-NEXT:    v_rcp_iflag_f32_e32 v2, v2
+; GCN-NEXT:    v_ashrrev_i32_e32 v0, 31, v0
 ; GCN-NEXT:    v_mul_f32_e32 v2, 0x4f7ffffe, v2
 ; GCN-NEXT:    v_cvt_u32_f32_e32 v2, v2
 ; GCN-NEXT:    v_mul_lo_u32 v3, v3, v2
 ; GCN-NEXT:    v_mul_hi_u32 v3, v2, v3
 ; GCN-NEXT:    v_add_u32_e32 v2, vcc, v2, v3
 ; GCN-NEXT:    v_mul_hi_u32 v2, v2, s4
-; GCN-NEXT:    v_mul_lo_u32 v3, v2, v0
+; GCN-NEXT:    v_mul_lo_u32 v3, v2, v1
 ; GCN-NEXT:    v_add_u32_e32 v4, vcc, 1, v2
 ; GCN-NEXT:    v_sub_u32_e32 v3, vcc, 0xf4240, v3
-; GCN-NEXT:    v_cmp_ge_u32_e32 vcc, v3, v0
+; GCN-NEXT:    v_cmp_ge_u32_e32 vcc, v3, v1
 ; GCN-NEXT:    v_cndmask_b32_e32 v2, v2, v4, vcc
-; GCN-NEXT:    v_sub_u32_e64 v4, s[4:5], v3, v0
+; GCN-NEXT:    v_sub_u32_e64 v4, s[4:5], v3, v1
 ; GCN-NEXT:    v_cndmask_b32_e32 v3, v3, v4, vcc
 ; GCN-NEXT:    v_add_u32_e32 v4, vcc, 1, v2
-; GCN-NEXT:    v_cmp_ge_u32_e32 vcc, v3, v0
-; GCN-NEXT:    v_cndmask_b32_e32 v0, v2, v4, vcc
-; GCN-NEXT:    v_xor_b32_e32 v0, v0, v1
-; GCN-NEXT:    v_sub_u32_e32 v0, vcc, v0, v1
+; GCN-NEXT:    v_cmp_ge_u32_e32 vcc, v3, v1
+; GCN-NEXT:    v_cndmask_b32_e32 v1, v2, v4, vcc
+; GCN-NEXT:    v_xor_b32_e32 v1, v1, v0
+; GCN-NEXT:    v_sub_u32_e32 v0, vcc, v1, v0
 ; GCN-NEXT:    s_setpc_b64 s[30:31]
   %select = select i1 %cond, i32 5, i32 ptrtoint (ptr addrspace(1) @gv to i32)
   %op = sdiv i32 1000000, %select
diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-idiv.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-idiv.ll
index 2ad28b8dd6ec..8144fb7a3b64 100644
--- a/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-idiv.ll
+++ b/llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-idiv.ll
@@ -245,39 +245,36 @@ define amdgpu_kernel void @sdiv_i32(ptr addrspace(1) %out, i32 %x, i32 %y) {
 ; GFX6-NEXT:    s_mov_b32 s7, 0xf000
 ; GFX6-NEXT:    s_mov_b32 s6, -1
 ; GFX6-NEXT:    s_waitcnt lgkmcnt(0)
-; GFX6-NEXT:    s_ashr_i32 s8, s3, 31
-; GFX6-NEXT:    s_add_i32 s3, s3, s8
-; GFX6-NEXT:    s_xor_b32 s3, s3, s8
-; GFX6-NEXT:    v_cvt_f32_u32_e32 v0, s3
-; GFX6-NEXT:    s_sub_i32 s4, 0, s3
-; GFX6-NEXT:    s_ashr_i32 s9, s2, 31
-; GFX6-NEXT:    s_add_i32 s2, s2, s9
-; GFX6-NEXT:    v_rcp_iflag_f32_e32 v0, v0
-; GFX6-NEXT:    s_xor_b32 s2, s2, s9
+; GFX6-NEXT:    s_abs_i32 s8, s3
+; GFX6-NEXT:    v_cvt_f32_u32_e32 v0, s8
+; GFX6-NEXT:    s_sub_i32 s4, 0, s8
 ; GFX6-NEXT:    s_mov_b32 s5, s1
+; GFX6-NEXT:    s_xor_b32 s1, s2, s3
+; GFX6-NEXT:    v_rcp_iflag_f32_e32 v0, v0
+; GFX6-NEXT:    s_ashr_i32 s1, s1, 31
 ; GFX6-NEXT:    v_mul_f32_e32 v0, 0x4f7ffffe, v0
 ; GFX6-NEXT:    v_cvt_u32_f32_e32 v0, v0
 ; GFX6-NEXT:    v_mul_lo_u32 v1, s4, v0
 ; GFX6-NEXT:    s_mov_b32 s4, s0
-; GFX6-NEXT:    s_xor_b32 s0, s9, s8
+; GFX6-NEXT:    s_abs_i32 s0, s2
 ; GFX6-NEXT:    v_mul_hi_u32 v1, v0, v1
 ; GFX6-NEXT:    v_add_i32_e32 v0, vcc, v0, v1
-; GFX6-NEXT:    v_mul_hi_u32 v0, s2, v0
-; GFX6-NEXT:    v_readfirstlane_b32 s1, v0
-; GFX6-NEXT:    s_mul_i32 s1, s1, s3
-; GFX6-NEXT:    s_sub_i32 s1, s2, s1
-; GFX6-NEXT:    s_sub_i32 s2, s1, s3
+; GFX6-NEXT:    v_mul_hi_u32 v0, s0, v0
+; GFX6-NEXT:    v_readfirstlane_b32 s2, v0
+; GFX6-NEXT:    s_mul_i32 s2, s2, s8
+; GFX6-NEXT:    s_sub_i32 s0, s0, s2
+; GFX6-NEXT:    s_sub_i32 s2, s0, s8
 ; GFX6-NEXT:    v_add_i32_e32 v1, vcc, 1, v0
-; GFX6-NEXT:    s_cmp_ge_u32 s1, s3
+; GFX6-NEXT:    s_cmp_ge_u32 s0, s8
 ; GFX6-NEXT:    s_cselect_b64 vcc, -1, 0
 ; GFX6-NEXT:    v_cndmask_b32_e32 v0, v0, v1, vcc
-; GFX6-NEXT:    s_cselect_b32 s1, s2, s1
+; GFX6-NEXT:    s_cselect_b32 s0, s2, s0
 ; GFX6-NEXT:    v_add_i32_e32 v1, vcc, 1, v0
-; GFX6-NEXT:    s_cmp_ge_u32 s1, s3
+; GFX6-NEXT:    s_cmp_ge_u32 s0, s8
 ; GFX6-NEXT:    s_cselect_b64 vcc, -1, 0
 ; GFX6-NEXT:    v_cndmask_b32_e32 v0, v0, v1, vcc
-; GFX6-NEXT:    v_xor_b32_e32 v0, s0, v0
-; GFX6-NEXT:    v_subrev_i32_e32 v0, vcc, s0, v0
+; GFX6-NEXT:    v_xor_b32_e32 v0, s1, v0
+; GFX6-NEXT:    v_subrev_i32_e32 v0, vcc, s1, v0
 ; GFX6-NEXT:    buffer_store_dword v0, off, s[4:7], 0
 ; GFX6-NEXT:    s_endpgm
 ;
@@ -286,16 +283,13 @@ define amdgpu_kernel void @sdiv_i32(ptr addrspace(1) %out, i32 %x, i32 %y) {
 ; GFX9-NEXT:    s_load_dwordx4 s[0:3], s[0:1], 0x24
 ; GFX9-NEXT:    v_mov_b32_e32 v1, 0
 ; GFX9-NEXT:    s_waitcnt lgkmcnt(0)
-; GFX9-NEXT:    s_ashr_i32 s4, s3, 31
-; GFX9-NEXT:    s_add_i32 s3, s3, s4
-; GFX9-NEXT:    s_xor_b32 s3, s3, s4
-; GFX9-NEXT:    v_cvt_f32_u32_e32 v0, s3
-; GFX9-NEXT:    s_ashr_i32 s5, s2, 31
-; GFX9-NEXT:    s_add_i32 s2, s2, s5
-; GFX9-NEXT:    s_xor_b32 s4, s5, s4
+; GFX9-NEXT:    s_abs_i32 s4, s3
+; GFX9-NEXT:    v_cvt_f32_u32_e32 v0, s4
+; GFX9-NEXT:    s_sub_i32 s5, 0, s4
+; GFX9-NEXT:    s_xor_b32 s3, s2, s3
+; GFX9-NEXT:    s_abs_i32 s2, s2
 ; GFX9-NEXT:    v_rcp_iflag_f32_e32 v0, v0
-; GFX9-NEXT:    s_xor_b32 s2, s2, s5
-; GFX9-NEXT:    s_sub_i32 s5, 0, s3
+; GFX9-NEXT:    s_ashr_i32 s3, s3, 31
 ; GFX9-NEXT:    v_mul_f32_e32 v0, 0x4f7ffffe, v0
 ; GFX9-NEXT:    v_cvt_u32_f32_e32 v0, v0
 ; GFX9-NEXT:    v_readfirstlane_b32 s6, v0
@@ -303,18 +297,18 @@ define amdgpu_kernel void @sdiv_i32(ptr addrspace(1) %out, i32 %x, i32 %y) {
 ; GFX9-NEXT:    s_mul_hi_u32 s5, s6, s5
 ; GFX9-NEXT:    s_add_i32 s6, s6, s5
 ; GFX9-NEXT:    s_mul_hi_u32 s5, s2, s6
-; GFX9-NEXT:    s_mul_i32 s6, s5, s3
+; GFX9-NEXT:    s_mul_i32 s6, s5, s4
 ; GFX9-NEXT:    s_sub_i32 s2, s2, s6
 ; GFX9-NEXT:    s_add_i32 s7, s5, 1
-; GFX9-NEXT:    s_sub_i32 s6, s2, s3
-; GFX9-NEXT:    s_cmp_ge_u32 s2, s3
+; GFX9-NEXT:    s_sub_i32 s6, s2, s4
+; GFX9-NEXT:    s_cmp_ge_u32 s2, s4
 ; GFX9-NEXT:    s_cselect_b32 s5, s7, s5
 ; GFX9-NEXT:    s_cselect_b32 s2, s6, s2
 ; GFX9-NEXT:    s_add_i32 s6, s5, 1
-; GFX9-NEXT:    s_cmp_ge_u32 s2, s3
+; GFX9-NEXT:    s_cmp_ge_u32 s2, s4
 ; GFX9-NEXT:    s_cselect_b32 s2, s6, s5
-; GFX9-NEXT:    s_xor_b32 s2, s2, s4
-; GFX9-NEXT:    s_sub_i32 s2, s2, s4
+; GFX9-NEXT:    s_xor_b32 s2, s2, s3
+; GFX9-NEXT:    s_sub_i32 s2, s2, s3
 ; GFX9-NEXT:    v_mov_b32_e32 v0, s2
 ; GFX9-NEXT:    global_store_dword v1, v0, s[0:1]
 ; GFX9-NEXT:    s_endpgm
@@ -366,37 +360,36 @@ define amdgpu_kernel void @srem_i32(ptr addrspace(1) %out, i32 %x, i32 %y) {
 ; GFX6-LABEL: srem_i32:
 ; GFX6:       ; %bb.0:
 ; GFX6-NEXT:    s_load_dwordx4 s[0:3], s[0:1], 0x9
+; GFX6-NEXT:    s_mov_b32 s7, 0xf000
+; GFX6-NEXT:    s_mov_b32 s6, -1
 ; GFX6-NEXT:    s_waitcnt lgkmcnt(0)
-; GFX6-NEXT:    s_ashr_i32 s4, s3, 31
-; GFX6-NEXT:    s_add_i32 s3, s3, s4
-; GFX6-NEXT:    s_xor_b32 s4, s3, s4
-; GFX6-NEXT:    v_cvt_f32_u32_e32 v0, s4
-; GFX6-NEXT:    s_sub_i32 s3, 0, s4
-; GFX6-NEXT:    s_ashr_i32 s5, s2, 31
-; GFX6-NEXT:    s_add_i32 s2, s2, s5
+; GFX6-NEXT:    s_abs_i32 s3, s3
+; GFX6-NEXT:    v_cvt_f32_u32_e32 v0, s3
+; GFX6-NEXT:    s_sub_i32 s4, 0, s3
+; GFX6-NEXT:    s_abs_i32 s8, s2
+; GFX6-NEXT:    s_mov_b32 s5, s1
 ; GFX6-NEXT:    v_rcp_iflag_f32_e32 v0, v0
-; GFX6-NEXT:    s_xor_b32 s6, s2, s5
-; GFX6-NEXT:    s_mov_b32 s2, -1
 ; GFX6-NEXT:    v_mul_f32_e32 v0, 0x4f7ffffe, v0
 ; GFX6-NEXT:    v_cvt_u32_f32_e32 v0, v0
-; GFX6-NEXT:    v_mul_lo_u32 v1, s3, v0
-; GFX6-NEXT:    s_mov_b32 s3, 0xf000
+; GFX6-NEXT:    v_mul_lo_u32 v1, s4, v0
+; GFX6-NEXT:    s_mov_b32 s4, s0
+; GFX6-NEXT:    s_ashr_i32 s0, s2, 31
 ; GFX6-NEXT:    v_mul_hi_u32 v1, v0, v1
 ; GFX6-NEXT:    v_add_i32_e32 v0, vcc, v0, v1
-; GFX6-NEXT:    v_mul_hi_u32 v0, s6, v0
-; GFX6-NEXT:    v_readfirstlane_b32 s7, v0
-; GFX6-NEXT:    s_mul_i32 s7, s7, s4
-; GFX6-NEXT:    s_sub_i32 s6, s6, s7
-; GFX6-NEXT:    s_sub_i32 s7, s6, s4
-; GFX6-NEXT:    s_cmp_ge_u32 s6, s4
-; GFX6-NEXT:    s_cselect_b32 s6, s7, s6
-; GFX6-NEXT:    s_sub_i32 s7, s6, s4
-; GFX6-NEXT:    s_cmp_ge_u32 s6, s4
-; GFX6-NEXT:    s_cselect_b32 s4, s7, s6
-; GFX6-NEXT:    s_xor_b32 s4, s4, s5
-; GFX6-NEXT:    s_sub_i32 s4, s4, s5
-; GFX6-NEXT:    v_mov_b32_e32 v0, s4
-; GFX6-NEXT:    buffer_store_dword v0, off, s[0:3], 0
+; GFX6-NEXT:    v_mul_hi_u32 v0, s8, v0
+; GFX6-NEXT:    v_readfirstlane_b32 s1, v0
+; GFX6-NEXT:    s_mul_i32 s1, s1, s3
+; GFX6-NEXT:    s_sub_i32 s1, s8, s1
+; GFX6-NEXT:    s_sub_i32 s2, s1, s3
+; GFX6-NEXT:    s_cmp_ge_u32 s1, s3
+; GFX6-NEXT:    s_cselect_b32 s1, s2, s1
+; GFX6-NEXT:    s_sub_i32 s2, s1, s3
+; GFX6-NEXT:    s_cmp_ge_u32 s1, s3
+; GFX6-NEXT:    s_cselect_b32 s1, s2, s1
+; GFX6-NEXT:    s_xor_b32 s1, s1, s0
+; GFX6-NEXT:    s_sub_i32 s0, s1, s0
+; GFX6-NEXT:    v_mov_b32_e32 v0, s0
+; GFX6-NEXT:    buffer_store_dword v0, off, s[4:7], 0
 ; GFX6-NEXT:    s_endpgm
 ;
 ; GFX9-LABEL: srem_i32:
@@ -404,15 +397,12 @@ define amdgpu_kernel void @srem_i32(ptr addrspace(1) %out, i32 %x, i32 %y) {
 ; GFX9-NEXT:    s_load_dwordx4 s[0:3], s[0:1], 0x24
 ; GFX9-NEXT:    v_mov_b32_e32 v1, 0
 ; GFX9-NEXT:    s_waitcnt lgkmcnt(0)
-; GFX9-NEXT:    s_ashr_i32 s4, s3, 31
-; GFX9-NEXT:    s_add_i32 s3, s3, s4
-; GFX9-NEXT:    s_xor_b32 s3, s3, s4
+; GFX9-NEXT:    s_abs_i32 s3, s3
 ; GFX9-NEXT:    v_cvt_f32_u32_e32 v0, s3
 ; GFX9-NEXT:    s_sub_i32 s5, 0, s3
 ; GFX9-NEXT:    s_ashr_i32 s4, s2, 31
-; GFX9-NEXT:    s_add_i32 s2, s2, s4
+; GFX9-NEXT:    s_abs_i32 s2, s2
 ; GFX9-NEXT:    v_rcp_iflag_f32_e32 v0, v0
-; GFX9-NEXT:    s_xor_b32 s2, s2, s4
 ; GFX9-NEXT:    v_mul_f32_e32 v0, 0x4f7ffffe, v0
 ; GFX9-NEXT:    v_cvt_u32_f32_e32 v0, v0
 ; GFX9-NEXT:    v_readfirstlane_b32 s6, v0
@@ -1857,126 +1847,114 @@ define amdgpu_kernel void @sdiv_v4i32(ptr addrspace(1) %out, <4 x i32> %x, <4 x
 ; GFX6-NEXT:    s_mov_b32 s19, 0xf000
 ; GFX6-NEXT:    s_mov_b32 s18, -1
 ; GFX6-NEXT:    s_waitcnt lgkmcnt(0)
-; GFX6-NEXT:    s_ashr_i32 s2, s12, 31
-; GFX6-NEXT:    s_add_i32 s3, s12, s2
-; GFX6-NEXT:    s_xor_b32 s3, s3, s2
-; GFX6-NEXT:    v_cvt_f32_u32_e32 v0, s3
-; GFX6-NEXT:    s_sub_i32 s4, 0, s3
+; GFX6-NEXT:    s_abs_i32 s2, s12
+; GFX6-NEXT:    v_cvt_f32_u32_e32 v0, s2
+; GFX6-NEXT:    s_sub_i32 s3, 0, s2
+; GFX6-NEXT:    s_xor_b32 s4, s8, s12
 ; GFX6-NEXT:    v_rcp_iflag_f32_e32 v0, v0
 ; GFX6-NEXT:    v_mul_f32_e32 v0, 0x4f7ffffe, v0
 ; GFX6-NEXT:    v_cvt_u32_f32_e32 v0, v0
-; GFX6-NEXT:    v_mul_lo_u32 v1, s4, v0
-; GFX6-NEXT:    s_ashr_i32 s4, s8, 31
-; GFX6-NEXT:    s_add_i32 s5, s8, s4
-; GFX6-NEXT:    s_xor_b32 s5, s5, s4
+; GFX6-NEXT:    v_mul_lo_u32 v1, s3, v0
+; GFX6-NEXT:    s_abs_i32 s3, s8
+; GFX6-NEXT:    s_ashr_i32 s8, s4, 31
 ; GFX6-NEXT:    v_mul_hi_u32 v1, v0, v1
-; GFX6-NEXT:    s_xor_b32 s8, s4, s2
 ; GFX6-NEXT:    v_add_i32_e32 v0, vcc, v0, v1
-; GFX6-NEXT:    v_mul_hi_u32 v0, s5, v0
-; GFX6-NEXT:    v_readfirstlane_b32 s2, v0
-; GFX6-NEXT:    s_mul_i32 s2, s2, s3
-; GFX6-NEXT:    s_sub_i32 s2, s5, s2
-; GFX6-NEXT:    s_sub_i32 s4, s2, s3
-; GFX6-NEXT:    s_cmp_ge_u32 s2, s3
+; GFX6-NEXT:    v_mul_hi_u32 v0, s3, v0
+; GFX6-NEXT:    v_readfirstlane_b32 s4, v0
+; GFX6-NEXT:    s_mul_i32 s4, s4, s2
+; GFX6-NEXT:    s_sub_i32 s3, s3, s4
+; GFX6-NEXT:    s_sub_i32 s4, s3, s2
+; GFX6-NEXT:    s_cmp_ge_u32 s3, s2
 ; GFX6-NEXT:    v_add_i32_e32 v1, vcc, 1, v0
-; GFX6-NEXT:    s_cselect_b32 s2, s4, s2
+; GFX6-NEXT:    s_cselect_b32 s3, s4, s3
 ; GFX6-NEXT:    s_cselect_b64 vcc, -1, 0
-; GFX6-NEXT:    s_cmp_ge_u32 s2, s3
+; GFX6-NEXT:    s_cmp_ge_u32 s3, s2
 ; GFX6-NEXT:    s_cselect_b64 s[2:3], -1, 0
-; GFX6-NEXT:    s_ashr_i32 s4, s13, 31
-; GFX6-NEXT:    s_add_i32 s5, s13, s4
-; GFX6-NEXT:    s_xor_b32 s5, s5, s4
-; GFX6-NEXT:    v_cvt_f32_u32_e32 v2, s5
-; GFX6-NEXT:    s_sub_i32 s6, 0, s5
+; GFX6-NEXT:    s_abs_i32 s4, s13
+; GFX6-NEXT:    v_cvt_f32_u32_e32 v2, s4
+; GFX6-NEXT:    s_sub_i32 s5, 0, s4
 ; GFX6-NEXT:    v_cndmask_b32_e32 v0, v0, v1, vcc
-; GFX6-NEXT:    v_add_i32_e32 v1, vcc, 1, v0
+; GFX6-NEXT:    s_xor_b32 s6, s9, s13
 ; GFX6-NEXT:    v_rcp_iflag_f32_e32 v2, v2
+; GFX6-NEXT:    v_add_i32_e32 v1, vcc, 1, v0
 ; GFX6-NEXT:    v_cndmask_b32_e64 v0, v0, v1, s[2:3]
-; GFX6-NEXT:    v_xor_b32_e32 v0, s8, v0
 ; GFX6-NEXT:    v_mul_f32_e32 v2, 0x4f7ffffe, v2
 ; GFX6-NEXT:    v_cvt_u32_f32_e32 v2, v2
-; GFX6-NEXT:    v_mul_lo_u32 v3, s6, v2
-; GFX6-NEXT:    s_ashr_i32 s6, s9, 31
-; GFX6-NEXT:    s_add_i32 s7, s9, s6
-; GFX6-NEXT:    s_xor_b32 s7, s7, s6
+; GFX6-NEXT:    v_xor_b32_e32 v0, s8, v0
+; GFX6-NEXT:    v_mul_lo_u32 v3, s5, v2
+; GFX6-NEXT:    s_abs_i32 s5, s9
+; GFX6-NEXT:    s_ashr_i32 s9, s6, 31
 ; GFX6-NEXT:    v_mul_hi_u32 v3, v2, v3
-; GFX6-NEXT:    s_xor_b32 s9, s6, s4
 ; GFX6-NEXT:    v_add_i32_e32 v2, vcc, v2, v3
-; GFX6-NEXT:    v_mul_hi_u32 v2, s7, v2
-; GFX6-NEXT:    v_readfirstlane_b32 s4, v2
-; GFX6-NEXT:    s_mul_i32 s4, s4, s5
-; GFX6-NEXT:    s_sub_i32 s4, s7, s4
-; GFX6-NEXT:    s_sub_i32 s6, s4, s5
-; GFX6-NEXT:    s_cmp_ge_u32 s4, s5
+; GFX6-NEXT:    v_mul_hi_u32 v2, s5, v2
+; GFX6-NEXT:    v_readfirstlane_b32 s6, v2
+; GFX6-NEXT:    s_mul_i32 s6, s6, s4
+; GFX6-NEXT:    s_sub_i32 s5, s5, s6
+; GFX6-NEXT:    s_sub_i32 s6, s5, s4
+; GFX6-NEXT:    s_cmp_ge_u32 s5, s4
 ; GFX6-NEXT:    v_add_i32_e32 v3, vcc, 1, v2
-; GFX6-NEXT:    s_cselect_b32 s4, s6, s4
+; GFX6-NEXT:    s_cselect_b32 s5, s6, s5
 ; GFX6-NEXT:    s_cselect_b64 vcc, -1, 0
-; GFX6-NEXT:    s_cmp_ge_u32 s4, s5
+; GFX6-NEXT:    s_cmp_ge_u32 s5, s4
 ; GFX6-NEXT:    s_cselect_b64 s[4:5], -1, 0
-; GFX6-NEXT:    s_ashr_i32 s6, s14, 31
-; GFX6-NEXT:    s_add_i32 s7, s14, s6
-; GFX6-NEXT:    s_xor_b32 s7, s7, s6
-; GFX6-NEXT:    v_cvt_f32_u32_e32 v4, s7
-; GFX6-NEXT:    s_sub_i32 s12, 0, s7
+; GFX6-NEXT:    s_abs_i32 s6, s14
+; GFX6-NEXT:    v_cvt_f32_u32_e32 v4, s6
+; GFX6-NEXT:    s_sub_i32 s7, 0, s6
 ; GFX6-NEXT:    v_cndmask_b32_e32 v2, v2, v3, vcc
 ; GFX6-NEXT:    v_add_i32_e32 v3, vcc, 1, v2
 ; GFX6-NEXT:    v_rcp_iflag_f32_e32 v4, v4
-; GFX6-NEXT:    v_cndmask_b32_e64 v2, v2, v3, s[4:5]
-; GFX6-NEXT:    v_xor_b32_e32 v2, s9, v2
 ; GFX6-NEXT:    v_mul_f32_e32 v4, 0x4f7ffffe, v4
 ; GFX6-NEXT:    v_cvt_u32_f32_e32 v4, v4
-; GFX6-NEXT:    v_mul_lo_u32 v5, s12, v4
-; GFX6-NEXT:    s_ashr_i32 s12, s10, 31
-; GFX6-NEXT:    s_add_i32 s10, s10, s12
-; GFX6-NEXT:    s_xor_b32 s10, s10, s12
+; GFX6-NEXT:    v_mul_lo_u32 v5, s7, v4
+; GFX6-NEXT:    s_abs_i32 s7, s10
+; GFX6-NEXT:    s_xor_b32 s10, s10, s14
+; GFX6-NEXT:    s_ashr_i32 s10, s10, 31
 ; GFX6-NEXT:    v_mul_hi_u32 v5, v4, v5
-; GFX6-NEXT:    s_xor_b32 s12, s12, s6
 ; GFX6-NEXT:    v_add_i32_e32 v4, vcc, v4, v5
-; GFX6-NEXT:    v_mul_hi_u32 v4, s10, v4
-; GFX6-NEXT:    v_readfirstlane_b32 s6, v4
-; GFX6-NEXT:    s_mul_i32 s6, s6, s7
-; GFX6-NEXT:    s_sub_i32 s6, s10, s6
-; GFX6-NEXT:    s_sub_i32 s10, s6, s7
-; GFX6-NEXT:    s_cmp_ge_u32 s6, s7
+; GFX6-NEXT:    v_mul_hi_u32 v4, s7, v4
+; GFX6-NEXT:    v_readfirstlane_b32 s12, v4
+; GFX6-NEXT:    s_mul_i32 s12, s12, s6
+; GFX6-NEXT:    s_sub_i32 s7, s7, s12
+; GFX6-NEXT:    s_sub_i32 s12, s7, s6
+; GFX6-NEXT:    s_cmp_ge_u32 s7, s6
 ; GFX6-NEXT:    v_add_i32_e32 v5, vcc, 1, v4
-; GFX6-NEXT:    s_cselect_b32 s6, s10, s6
+; GFX6-NEXT:    s_cselect_b32 s7, s12, s7
 ; GFX6-NEXT:    s_cselect_b64 vcc, -1, 0
-; GFX6-NEXT:    s_cmp_ge_u32 s6, s7
+; GFX6-NEXT:    s_cmp_ge_u32 s7, s6
 ; GFX6-NEXT:    s_cselect_b64 s[6:7], -1, 0
-; GFX6-NEXT:    s_ashr_i32 s10, s15, 31
-; GFX6-NEXT:    s_add_i32 s13, s15, s10
-; GFX6-NEXT:    s_xor_b32 s13, s13, s10
-; GFX6-NEXT:    v_cvt_f32_u32_e32 v6, s13
-; GFX6-NEXT:    s_sub_i32 s0, 0, s13
+; GFX6-NEXT:    s_abs_i32 s12, s15
+; GFX6-NEXT:    v_cvt_f32_u32_e32 v6, s12
+; GFX6-NEXT:    s_sub_i32 s0, 0, s12
 ; GFX6-NEXT:    v_cndmask_b32_e32 v4, v4, v5, vcc
 ; GFX6-NEXT:    v_add_i32_e32 v5, vcc, 1, v4
 ; GFX6-NEXT:    v_rcp_iflag_f32_e32 v1, v6
-; GFX6-NEXT:    v_cndmask_b32_e64 v4, v4, v5, s[6:7]
-; GFX6-NEXT:    v_xor_b32_e32 v4, s12, v4
+; GFX6-NEXT:    s_abs_i32 s1, s11
 ; GFX6-NEXT:    v_subrev_i32_e32 v0, vcc, s8, v0
 ; GFX6-NEXT:    v_mul_f32_e32 v1, 0x4f7ffffe, v1
-; GFX6-NEXT:    v_cvt_u32_f32_e32 v3, v1
-; GFX6-NEXT:    v_subrev_i32_e32 v1, vcc, s9, v2
-; GFX6-NEXT:    v_mul_lo_u32 v2, s0, v3
-; GFX6-NEXT:    s_ashr_i32 s0, s11, 31
-; GFX6-NEXT:    s_add_i32 s1, s11, s0
-; GFX6-NEXT:    s_xor_b32 s1, s1, s0
-; GFX6-NEXT:    v_mul_hi_u32 v2, v3, v2
-; GFX6-NEXT:    s_xor_b32 s0, s0, s10
-; GFX6-NEXT:    v_add_i32_e32 v2, vcc, v3, v2
-; GFX6-NEXT:    v_mul_hi_u32 v3, s1, v2
-; GFX6-NEXT:    v_subrev_i32_e32 v2, vcc, s12, v4
-; GFX6-NEXT:    v_readfirstlane_b32 s2, v3
-; GFX6-NEXT:    s_mul_i32 s2, s2, s13
+; GFX6-NEXT:    v_cvt_u32_f32_e32 v6, v1
+; GFX6-NEXT:    v_cndmask_b32_e64 v1, v2, v3, s[4:5]
+; GFX6-NEXT:    v_cndmask_b32_e64 v3, v4, v5, s[6:7]
+; GFX6-NEXT:    v_xor_b32_e32 v1, s9, v1
+; GFX6-NEXT:    v_mul_lo_u32 v2, s0, v6
+; GFX6-NEXT:    s_xor_b32 s0, s11, s15
+; GFX6-NEXT:    v_xor_b32_e32 v3, s10, v3
+; GFX6-NEXT:  ...
[truncated]

@arsenm (Contributor) left a comment


AMDGPU only has truly legal abs for the SALU 32-bit case. We fake-report it as legal and then expand the VALU version in selection. It looks like for targets with packed operations, we get a bad expansion in the 16-bit vector case: e.g. for v2i16 we get 2 instructions, but v4i16 gets some complicated expansion instead of just splitting the vector.
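
A minimal reproducer for the packed-16-bit case might look like this (a hypothetical test, not part of this patch):

; With this fold enabled, v2i16 abs selects to a couple of packed
; instructions, but v4i16 currently gets a complicated expansion
; instead of being split into two v2i16 operations.
define <4 x i16> @abs_v4i16(<4 x i16> %x) {
  %r = call <4 x i16> @llvm.abs.v4i16(<4 x i16> %x, i1 false)
  ret <4 x i16> %r
}
declare <4 x i16> @llvm.abs.v4i16(<4 x i16>, i1)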

@arsenm (Contributor) left a comment

All the div test changes make me think we should be emitting an abs directly in the expansions.
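
As a sketch of that suggestion (assumed, not part of this patch): the division expansions currently open-code |x| with the shift idiom, which this fold then has to recognize; emitting an abs node directly would avoid the round trip. At the IR level the two forms are:

; the open-coded |x| the expansions build today
define i32 @abs_open_coded(i32 %x) {
  %s = ashr i32 %x, 31
  %a = add i32 %x, %s
  %r = xor i32 %a, %s
  ret i32 %r
}

; the direct form an explicit abs would correspond to
define i32 @abs_direct(i32 %x) {
  %r = call i32 @llvm.abs.i32(i32 %x, i1 false)
  ret i32 %r
}
declare i32 @llvm.abs.i32(i32, i1)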

RKSimon added a commit that referenced this pull request Jun 6, 2024
@RKSimon requested a review from davemgreen June 6, 2024 12:51
github-actions bot commented Jun 6, 2024

⚠️ C/C++ code formatter, clang-format found issues in your code. ⚠️

You can test this locally with the following command:
git-clang-format --diff c0b468523c9c5517e61a197e7c1fe6cb52f8999c fe0b9945547466fa1603eb3d863d730f34392144 -- llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp llvm/lib/Target/ARM/ARMISelLowering.cpp
View the diff from clang-format here.
diff --git a/llvm/lib/Target/ARM/ARMISelLowering.cpp b/llvm/lib/Target/ARM/ARMISelLowering.cpp
index e327047198..faf5bcd3cd 100644
--- a/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -18863,7 +18863,8 @@ SDValue ARMTargetLowering::PerformDAGCombine(SDNode *N,
   case ISD::SELECT_CC:
   case ISD::SELECT:     return PerformSELECTCombine(N, DCI, Subtarget);
   case ISD::VSELECT:    return PerformVSELECTCombine(N, DCI, Subtarget);
-  case ISD::SETCC:      return PerformVSetCCToVCTPCombine(N, DCI, Subtarget);
+  case ISD::SETCC:
+    return PerformVSetCCToVCTPCombine(N, DCI, Subtarget);
   case ARMISD::ADDE:    return PerformADDECombine(N, DCI, Subtarget);
   case ARMISD::UMLAL:   return PerformUMLALCombine(N, DCI.DAG, Subtarget);
   case ISD::ADD:        return PerformADDCombine(N, DCI, Subtarget);

@RKSimon (Collaborator, Author) commented Jun 6, 2024

@davemgreen I've had to pull out a few ARM-specific changes from #94504 to prevent infinite loops re-expanding vXi64 ABS.

We've lost the neon vabdl.u32 pattern; do you want me to tweak the neon tablegen pattern in this patch? It will be removed entirely in #94504, which should be ready to go immediately after this patch.

@davemgreen (Collaborator) commented

It has always felt funny to me for DAG to create nodes that the target doesn't have support for. I guess it makes sense for abs.

> @davemgreen I've had to pull out a few ARM-specific changes from #94504 to prevent infinite loops re-expanding vXi64 ABS.
>
> We've lost the neon vabdl.u32 pattern; do you want me to tweak the neon tablegen pattern in this patch? It will be removed entirely in #94504, which should be ready to go immediately after this patch.

Do you mean that this and #94504 cancel each other out and the regression goes away? If so this sounds fine, yeah.

@RKSimon (Collaborator, Author) commented Jun 6, 2024

> It has always felt funny to me for DAG to create nodes that the target doesn't have support for. I guess it makes sense for abs.
>
> > @davemgreen I've had to pull out a few ARM-specific changes from #94504 to prevent infinite loops re-expanding vXi64 ABS.
> >
> > We've lost the neon vabdl.u32 pattern; do you want me to tweak the neon tablegen pattern in this patch? It will be removed entirely in #94504, which should be ready to go immediately after this patch.
>
> Do you mean that this and #94504 cancel each other out and the regression goes away? If so this sounds fine, yeah.

Thanks - I've simplified #94504 now, so I'd prefer to land that first if you are happy to accept it.

Removes residual ARM handling for vXi64 ABS nodes to prevent infinite loops.
@RKSimon merged commit af3ffff into llvm:main Jun 7, 2024
3 of 6 checks passed
@RKSimon deleted the legal-abs branch June 7, 2024 10:02
@HerrCai0907 mentioned this pull request Jun 13, 2024