[DAG] SimplifyMultipleUseDemandedBits - ignore SRL node if we're just demanding known sign bits #114389

Merged
merged 1 commit into llvm:main from RKSimon:dag-srl-multiuse-demandedbits on Oct 31, 2024

Conversation

RKSimon
Collaborator

@RKSimon RKSimon commented Oct 31, 2024

Check whether we are only demanding (shifted) sign bits from an SRL node that are also sign bits in the source node.

We can't demand any of the upper zero bits that the SRL will shift in (up to the maximum shift amount), and every bit from the lowest demanded bit upwards must already be a sign bit in the source.
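To see why these two bounds suffice: bit i of srl(x, s) is bit i+s of x whenever i+s < BitWidth, and a shifted-in zero otherwise. Demanding none of the top ShAmt bits rules out the shifted-in zeroes, and demanding no bit below BitWidth - NumSignBits puts both i and i+s inside the run of identical sign bits of x, so x and srl(x, s) agree on every demanded bit. The following is a minimal standalone C++ sketch of that argument with a brute-force check; the concrete BitWidth, ShAmt, NumSignBits, and DemandedBits values are illustrative assumptions, and plain uint32_t stands in for APInt.

#include <cassert>
#include <cstdint>

// Portable leading/trailing zero counts for a 32-bit mask.
static unsigned countlZero(uint32_t V) {
  unsigned N = 0;
  for (int B = 31; B >= 0 && !((V >> B) & 1u); --B)
    ++N;
  return N;
}

static unsigned countrZero(uint32_t V) {
  unsigned N = 0;
  for (int B = 0; B < 32 && !((V >> B) & 1u); ++B)
    ++N;
  return N;
}

int main() {
  const unsigned BitWidth = 32;
  const unsigned ShAmt = 4;        // illustrative maximum shift amount
  const unsigned NumSignBits = 25; // source known to have >= 25 sign bits
  const uint32_t DemandedBits = 0x0FF00000u; // demand bits 20..27 only

  // The patch's condition: no demanded bit may be one of the ShAmt zeroes
  // shifted in at the top, and the lowest demanded bit must already sit
  // inside the source's sign-bit run.
  assert(countlZero(DemandedBits) >= ShAmt);
  assert(countrZero(DemandedBits) >= BitWidth - NumSignBits);

  // Brute-force verification: for every x whose top 25 bits are copies of
  // bit 7, (x >> ShAmt) and x agree on all demanded bits, so the shift
  // source can be used directly for these uses.
  for (uint32_t Low = 0; Low < 256; ++Low) {
    uint32_t X = (uint32_t)(int32_t)(int8_t)Low; // sign-extend an 8-bit pattern
    assert((((X >> ShAmt) ^ X) & DemandedBits) == 0);
  }
  return 0;
}

The sketch only covers the bit-equality argument; the patch itself first establishes a valid maximum shift amount across the demanded elements via DAG.getValidMaximumShiftAmount before performing the comparison.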

@llvmbot
Member

llvmbot commented Oct 31, 2024

@llvm/pr-subscribers-backend-amdgpu

@llvm/pr-subscribers-llvm-selectiondag

Author: Simon Pilgrim (RKSimon)

Changes

Check whether we are only demanding (shifted) sign bits from an SRL node that are also sign bits in the source node.

We can't demand any of the upper zero bits that the SRL will shift in (up to the maximum shift amount), and every bit from the lowest demanded bit upwards must already be a sign bit in the source.


Patch is 58.01 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/114389.diff

6 Files Affected:

  • (modified) llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp (+18)
  • (modified) llvm/test/CodeGen/AMDGPU/div-rem-by-constant-64.ll (+22-28)
  • (modified) llvm/test/CodeGen/AMDGPU/fptoi.i128.ll (+72-80)
  • (modified) llvm/test/CodeGen/RISCV/pr95284.ll (+10-12)
  • (modified) llvm/test/CodeGen/RISCV/urem-seteq-illegal-types.ll (+11-11)
  • (modified) llvm/test/CodeGen/X86/scmp.ll (+313-324)
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index fabcbc5f0e856d..de291493faddbb 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -808,6 +808,24 @@ SDValue TargetLowering::SimplifyMultipleUseDemandedBits(
     }
     break;
   }
+  case ISD::SRL: {
+    // If we are only demanding sign bits then we can use the shift source
+    // directly.
+    if (std::optional<uint64_t> MaxSA =
+            DAG.getValidMaximumShiftAmount(Op, DemandedElts, Depth + 1)) {
+      SDValue Op0 = Op.getOperand(0);
+      unsigned ShAmt = *MaxSA;
+      unsigned NumSignBits =
+          DAG.ComputeNumSignBits(Op0, DemandedElts, Depth + 1);
+      unsigned LoDemandedBits = DemandedBits.countr_zero();
+      unsigned HiDemandedBits = DemandedBits.countl_zero();
+      // Must already be signbits in DemandedBits bounds, and can't demand any
+      // shifted in zeroes.
+      if (HiDemandedBits >= ShAmt && LoDemandedBits >= (BitWidth - NumSignBits))
+        return Op0;
+    }
+    break;
+  }
   case ISD::SETCC: {
     SDValue Op0 = Op.getOperand(0);
     SDValue Op1 = Op.getOperand(1);
diff --git a/llvm/test/CodeGen/AMDGPU/div-rem-by-constant-64.ll b/llvm/test/CodeGen/AMDGPU/div-rem-by-constant-64.ll
index 4143c65a840d71..662de47413654f 100644
--- a/llvm/test/CodeGen/AMDGPU/div-rem-by-constant-64.ll
+++ b/llvm/test/CodeGen/AMDGPU/div-rem-by-constant-64.ll
@@ -1052,16 +1052,15 @@ define noundef i64 @srem64_i32max(i64 noundef %i)  {
 ; GFX9-NEXT:    s_mov_b32 s6, 0x80000001
 ; GFX9-NEXT:    v_ashrrev_i32_e32 v6, 31, v1
 ; GFX9-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v1, 3, v[2:3]
-; GFX9-NEXT:    v_mul_i32_i24_e32 v8, 3, v6
-; GFX9-NEXT:    v_lshl_add_u32 v9, v6, 31, v6
-; GFX9-NEXT:    v_mov_b32_e32 v10, v5
+; GFX9-NEXT:    v_lshl_add_u32 v8, v6, 31, v6
+; GFX9-NEXT:    v_mad_u64_u32 v[6:7], s[4:5], v6, 3, 0
+; GFX9-NEXT:    v_mov_b32_e32 v9, v5
 ; GFX9-NEXT:    v_mov_b32_e32 v5, v3
 ; GFX9-NEXT:    v_mad_u64_u32 v[2:3], s[4:5], v0, s6, v[4:5]
-; GFX9-NEXT:    v_mad_u64_u32 v[6:7], s[4:5], v6, 3, 0
-; GFX9-NEXT:    v_mov_b32_e32 v2, v3
-; GFX9-NEXT:    v_add_co_u32_e32 v2, vcc, v10, v2
-; GFX9-NEXT:    v_add3_u32 v7, v7, v9, v8
+; GFX9-NEXT:    v_add3_u32 v7, v7, v8, v6
 ; GFX9-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v0, -1, v[6:7]
+; GFX9-NEXT:    v_mov_b32_e32 v2, v3
+; GFX9-NEXT:    v_add_co_u32_e32 v2, vcc, v9, v2
 ; GFX9-NEXT:    v_addc_co_u32_e64 v3, s[4:5], 0, 0, vcc
 ; GFX9-NEXT:    v_mad_u64_u32 v[2:3], s[4:5], v1, s6, v[2:3]
 ; GFX9-NEXT:    v_sub_u32_e32 v5, v5, v1
@@ -1085,10 +1084,9 @@ define noundef i64 @srem64_i32max(i64 noundef %i)  {
 ; GFX942:       ; %bb.0: ; %entry
 ; GFX942-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX942-NEXT:    v_ashrrev_i32_e32 v2, 31, v1
-; GFX942-NEXT:    v_mul_i32_i24_e32 v4, 3, v2
-; GFX942-NEXT:    v_lshl_add_u32 v5, v2, 31, v2
+; GFX942-NEXT:    v_lshl_add_u32 v4, v2, 31, v2
 ; GFX942-NEXT:    v_mad_u64_u32 v[2:3], s[0:1], v2, 3, 0
-; GFX942-NEXT:    v_add3_u32 v3, v3, v5, v4
+; GFX942-NEXT:    v_add3_u32 v3, v3, v4, v2
 ; GFX942-NEXT:    v_mul_hi_u32 v4, v0, 3
 ; GFX942-NEXT:    v_mov_b32_e32 v5, 0
 ; GFX942-NEXT:    v_mad_u64_u32 v[6:7], s[0:1], v1, 3, v[4:5]
@@ -1125,17 +1123,16 @@ define noundef i64 @srem64_i32max(i64 noundef %i)  {
 ; GFX1030-NEXT:    v_mul_hi_u32 v2, v0, 3
 ; GFX1030-NEXT:    v_mov_b32_e32 v3, 0
 ; GFX1030-NEXT:    v_ashrrev_i32_e32 v6, 31, v1
-; GFX1030-NEXT:    v_mul_i32_i24_e32 v7, 3, v6
 ; GFX1030-NEXT:    v_mad_u64_u32 v[4:5], null, v1, 3, v[2:3]
-; GFX1030-NEXT:    v_mov_b32_e32 v8, v5
+; GFX1030-NEXT:    v_mov_b32_e32 v7, v5
 ; GFX1030-NEXT:    v_mov_b32_e32 v5, v3
 ; GFX1030-NEXT:    v_mad_u64_u32 v[2:3], null, v6, 3, 0
 ; GFX1030-NEXT:    v_lshl_add_u32 v6, v6, 31, v6
 ; GFX1030-NEXT:    v_mad_u64_u32 v[4:5], null, 0x80000001, v0, v[4:5]
-; GFX1030-NEXT:    v_add3_u32 v3, v3, v6, v7
+; GFX1030-NEXT:    v_add3_u32 v3, v3, v6, v2
 ; GFX1030-NEXT:    v_mov_b32_e32 v4, v5
 ; GFX1030-NEXT:    v_mad_u64_u32 v[2:3], null, v0, -1, v[2:3]
-; GFX1030-NEXT:    v_add_co_u32 v4, s4, v8, v4
+; GFX1030-NEXT:    v_add_co_u32 v4, s4, v7, v4
 ; GFX1030-NEXT:    v_add_co_ci_u32_e64 v5, null, 0, 0, s4
 ; GFX1030-NEXT:    v_sub_nc_u32_e32 v6, v3, v1
 ; GFX1030-NEXT:    v_mad_u64_u32 v[3:4], null, 0x80000001, v1, v[4:5]
@@ -1167,16 +1164,15 @@ define noundef i64 @sdiv64_i32max(i64 noundef %i)  {
 ; GFX9-NEXT:    s_mov_b32 s6, 0x80000001
 ; GFX9-NEXT:    v_ashrrev_i32_e32 v6, 31, v1
 ; GFX9-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v1, 3, v[2:3]
-; GFX9-NEXT:    v_mul_i32_i24_e32 v8, 3, v6
-; GFX9-NEXT:    v_lshl_add_u32 v9, v6, 31, v6
-; GFX9-NEXT:    v_mov_b32_e32 v10, v5
+; GFX9-NEXT:    v_lshl_add_u32 v8, v6, 31, v6
+; GFX9-NEXT:    v_mad_u64_u32 v[6:7], s[4:5], v6, 3, 0
+; GFX9-NEXT:    v_mov_b32_e32 v9, v5
 ; GFX9-NEXT:    v_mov_b32_e32 v5, v3
 ; GFX9-NEXT:    v_mad_u64_u32 v[2:3], s[4:5], v0, s6, v[4:5]
-; GFX9-NEXT:    v_mad_u64_u32 v[6:7], s[4:5], v6, 3, 0
-; GFX9-NEXT:    v_mov_b32_e32 v2, v3
-; GFX9-NEXT:    v_add_co_u32_e32 v2, vcc, v10, v2
-; GFX9-NEXT:    v_add3_u32 v7, v7, v9, v8
+; GFX9-NEXT:    v_add3_u32 v7, v7, v8, v6
 ; GFX9-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v0, -1, v[6:7]
+; GFX9-NEXT:    v_mov_b32_e32 v2, v3
+; GFX9-NEXT:    v_add_co_u32_e32 v2, vcc, v9, v2
 ; GFX9-NEXT:    v_addc_co_u32_e64 v3, s[4:5], 0, 0, vcc
 ; GFX9-NEXT:    v_mad_u64_u32 v[2:3], s[4:5], v1, s6, v[2:3]
 ; GFX9-NEXT:    v_sub_u32_e32 v5, v5, v1
@@ -1195,10 +1191,9 @@ define noundef i64 @sdiv64_i32max(i64 noundef %i)  {
 ; GFX942:       ; %bb.0: ; %entry
 ; GFX942-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; GFX942-NEXT:    v_ashrrev_i32_e32 v2, 31, v1
-; GFX942-NEXT:    v_mul_i32_i24_e32 v4, 3, v2
-; GFX942-NEXT:    v_lshl_add_u32 v5, v2, 31, v2
+; GFX942-NEXT:    v_lshl_add_u32 v4, v2, 31, v2
 ; GFX942-NEXT:    v_mad_u64_u32 v[2:3], s[0:1], v2, 3, 0
-; GFX942-NEXT:    v_add3_u32 v3, v3, v5, v4
+; GFX942-NEXT:    v_add3_u32 v3, v3, v4, v2
 ; GFX942-NEXT:    v_mul_hi_u32 v4, v0, 3
 ; GFX942-NEXT:    v_mov_b32_e32 v5, 0
 ; GFX942-NEXT:    v_mad_u64_u32 v[6:7], s[0:1], v1, 3, v[4:5]
@@ -1227,17 +1222,16 @@ define noundef i64 @sdiv64_i32max(i64 noundef %i)  {
 ; GFX1030-NEXT:    v_mul_hi_u32 v2, v0, 3
 ; GFX1030-NEXT:    v_mov_b32_e32 v3, 0
 ; GFX1030-NEXT:    v_ashrrev_i32_e32 v6, 31, v1
-; GFX1030-NEXT:    v_mul_i32_i24_e32 v7, 3, v6
 ; GFX1030-NEXT:    v_mad_u64_u32 v[4:5], null, v1, 3, v[2:3]
-; GFX1030-NEXT:    v_mov_b32_e32 v8, v5
+; GFX1030-NEXT:    v_mov_b32_e32 v7, v5
 ; GFX1030-NEXT:    v_mov_b32_e32 v5, v3
 ; GFX1030-NEXT:    v_mad_u64_u32 v[2:3], null, v6, 3, 0
 ; GFX1030-NEXT:    v_lshl_add_u32 v6, v6, 31, v6
 ; GFX1030-NEXT:    v_mad_u64_u32 v[4:5], null, 0x80000001, v0, v[4:5]
-; GFX1030-NEXT:    v_add3_u32 v3, v3, v6, v7
+; GFX1030-NEXT:    v_add3_u32 v3, v3, v6, v2
 ; GFX1030-NEXT:    v_mov_b32_e32 v4, v5
 ; GFX1030-NEXT:    v_mad_u64_u32 v[2:3], null, v0, -1, v[2:3]
-; GFX1030-NEXT:    v_add_co_u32 v4, s4, v8, v4
+; GFX1030-NEXT:    v_add_co_u32 v4, s4, v7, v4
 ; GFX1030-NEXT:    v_add_co_ci_u32_e64 v5, null, 0, 0, s4
 ; GFX1030-NEXT:    v_sub_nc_u32_e32 v6, v3, v1
 ; GFX1030-NEXT:    v_mad_u64_u32 v[3:4], null, 0x80000001, v1, v[4:5]
diff --git a/llvm/test/CodeGen/AMDGPU/fptoi.i128.ll b/llvm/test/CodeGen/AMDGPU/fptoi.i128.ll
index 786fe03164690e..68ebc21e2ba4d6 100644
--- a/llvm/test/CodeGen/AMDGPU/fptoi.i128.ll
+++ b/llvm/test/CodeGen/AMDGPU/fptoi.i128.ll
@@ -37,12 +37,11 @@ define i128 @fptosi_f64_to_i128(double %x) {
 ; SDAG-NEXT:  ; %bb.2: ; %fp-to-i-if-end9
 ; SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, vcc
 ; SDAG-NEXT:    v_add_co_u32_e64 v9, s[4:5], -1, v0
-; SDAG-NEXT:    v_addc_co_u32_e64 v10, s[4:5], 0, -1, s[4:5]
 ; SDAG-NEXT:    s_mov_b64 s[4:5], 0x432
 ; SDAG-NEXT:    v_and_b32_e32 v0, 0xfffff, v5
 ; SDAG-NEXT:    v_cmp_lt_u64_e64 s[4:5], s[4:5], v[6:7]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v8, -1, 0, vcc
-; SDAG-NEXT:    v_cndmask_b32_e64 v11, -1, 1, vcc
+; SDAG-NEXT:    v_cndmask_b32_e64 v10, -1, 1, vcc
 ; SDAG-NEXT:    v_or_b32_e32 v5, 0x100000, v0
 ; SDAG-NEXT:    ; implicit-def: $vgpr0_vgpr1
 ; SDAG-NEXT:    ; implicit-def: $vgpr2_vgpr3
@@ -62,34 +61,33 @@ define i128 @fptosi_f64_to_i128(double %x) {
 ; SDAG-NEXT:    v_cndmask_b32_e64 v2, v2, v0, s[4:5]
 ; SDAG-NEXT:    v_lshlrev_b64 v[0:1], v7, v[4:5]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v2, 0, v2, s[6:7]
-; SDAG-NEXT:    v_cndmask_b32_e64 v12, 0, v0, s[4:5]
+; SDAG-NEXT:    v_cndmask_b32_e64 v11, 0, v0, s[4:5]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v7, 0, v1, s[4:5]
-; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v12, v11, 0
+; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v11, v10, 0
 ; SDAG-NEXT:    v_mov_b32_e32 v3, 0
-; SDAG-NEXT:    v_mul_lo_u32 v13, v8, v2
-; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v11, v2, 0
+; SDAG-NEXT:    v_mul_lo_u32 v12, v8, v2
+; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v10, v2, 0
 ; SDAG-NEXT:    v_mov_b32_e32 v2, v1
-; SDAG-NEXT:    v_mul_lo_u32 v6, v11, v6
-; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v7, v11, v[2:3]
-; SDAG-NEXT:    v_mul_lo_u32 v10, v10, v12
-; SDAG-NEXT:    v_add3_u32 v5, v5, v6, v13
+; SDAG-NEXT:    v_mul_lo_u32 v6, v10, v6
+; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v7, v10, v[2:3]
+; SDAG-NEXT:    v_mul_lo_u32 v10, v9, v7
+; SDAG-NEXT:    v_add3_u32 v5, v5, v6, v12
 ; SDAG-NEXT:    v_mov_b32_e32 v6, v2
 ; SDAG-NEXT:    v_mov_b32_e32 v2, v3
-; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v12, v8, v[1:2]
-; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v9, v12, v[4:5]
+; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v11, v8, v[1:2]
+; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v9, v11, v[4:5]
 ; SDAG-NEXT:    v_add_co_u32_e64 v5, s[4:5], v6, v2
 ; SDAG-NEXT:    v_addc_co_u32_e64 v6, s[4:5], 0, 0, s[4:5]
-; SDAG-NEXT:    v_mul_lo_u32 v9, v9, v7
+; SDAG-NEXT:    v_mul_lo_u32 v9, v9, v11
 ; SDAG-NEXT:    v_mad_u64_u32 v[5:6], s[4:5], v7, v8, v[5:6]
-; SDAG-NEXT:    ; implicit-def: $vgpr11
 ; SDAG-NEXT:    ; implicit-def: $vgpr8
-; SDAG-NEXT:    v_add3_u32 v4, v10, v4, v9
+; SDAG-NEXT:    v_add3_u32 v4, v9, v4, v10
 ; SDAG-NEXT:    v_add_co_u32_e64 v2, s[4:5], v5, v3
 ; SDAG-NEXT:    v_addc_co_u32_e64 v3, s[4:5], v6, v4, s[4:5]
 ; SDAG-NEXT:    ; implicit-def: $vgpr6_vgpr7
 ; SDAG-NEXT:    ; implicit-def: $vgpr4_vgpr5
-; SDAG-NEXT:    ; implicit-def: $vgpr9
 ; SDAG-NEXT:    ; implicit-def: $vgpr10
+; SDAG-NEXT:    ; implicit-def: $vgpr9
 ; SDAG-NEXT:  .LBB0_4: ; %Flow
 ; SDAG-NEXT:    s_andn2_saveexec_b64 s[12:13], s[12:13]
 ; SDAG-NEXT:    s_cbranch_execz .LBB0_6
@@ -102,9 +100,9 @@ define i128 @fptosi_f64_to_i128(double %x) {
 ; SDAG-NEXT:    v_cndmask_b32_e64 v1, 0, v1, s[4:5]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v6, v0, v4, s[6:7]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v5, v1, v5, s[6:7]
-; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v6, v11, 0
+; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v6, v10, 0
 ; SDAG-NEXT:    v_mov_b32_e32 v2, 0
-; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v5, v11, v[1:2]
+; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v5, v10, v[1:2]
 ; SDAG-NEXT:    v_mov_b32_e32 v7, v4
 ; SDAG-NEXT:    v_mov_b32_e32 v4, v2
 ; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v6, v8, v[3:4]
@@ -112,7 +110,7 @@ define i128 @fptosi_f64_to_i128(double %x) {
 ; SDAG-NEXT:    v_addc_co_u32_e64 v3, s[4:5], 0, 0, s[4:5]
 ; SDAG-NEXT:    v_mad_u64_u32 v[2:3], s[4:5], v5, v8, v[2:3]
 ; SDAG-NEXT:    v_mad_u64_u32 v[2:3], s[4:5], v9, v6, v[2:3]
-; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v10, v6, v[3:4]
+; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v9, v6, v[3:4]
 ; SDAG-NEXT:    v_mad_i32_i24 v3, v9, v5, v3
 ; SDAG-NEXT:  .LBB0_6: ; %Flow1
 ; SDAG-NEXT:    s_or_b64 exec, exec, s[12:13]
@@ -409,12 +407,11 @@ define i128 @fptoui_f64_to_i128(double %x) {
 ; SDAG-NEXT:  ; %bb.2: ; %fp-to-i-if-end9
 ; SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, vcc
 ; SDAG-NEXT:    v_add_co_u32_e64 v9, s[4:5], -1, v0
-; SDAG-NEXT:    v_addc_co_u32_e64 v10, s[4:5], 0, -1, s[4:5]
 ; SDAG-NEXT:    s_mov_b64 s[4:5], 0x432
 ; SDAG-NEXT:    v_and_b32_e32 v0, 0xfffff, v5
 ; SDAG-NEXT:    v_cmp_lt_u64_e64 s[4:5], s[4:5], v[6:7]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v8, -1, 0, vcc
-; SDAG-NEXT:    v_cndmask_b32_e64 v11, -1, 1, vcc
+; SDAG-NEXT:    v_cndmask_b32_e64 v10, -1, 1, vcc
 ; SDAG-NEXT:    v_or_b32_e32 v5, 0x100000, v0
 ; SDAG-NEXT:    ; implicit-def: $vgpr0_vgpr1
 ; SDAG-NEXT:    ; implicit-def: $vgpr2_vgpr3
@@ -434,34 +431,33 @@ define i128 @fptoui_f64_to_i128(double %x) {
 ; SDAG-NEXT:    v_cndmask_b32_e64 v2, v2, v0, s[4:5]
 ; SDAG-NEXT:    v_lshlrev_b64 v[0:1], v7, v[4:5]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v2, 0, v2, s[6:7]
-; SDAG-NEXT:    v_cndmask_b32_e64 v12, 0, v0, s[4:5]
+; SDAG-NEXT:    v_cndmask_b32_e64 v11, 0, v0, s[4:5]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v7, 0, v1, s[4:5]
-; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v12, v11, 0
+; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v11, v10, 0
 ; SDAG-NEXT:    v_mov_b32_e32 v3, 0
-; SDAG-NEXT:    v_mul_lo_u32 v13, v8, v2
-; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v11, v2, 0
+; SDAG-NEXT:    v_mul_lo_u32 v12, v8, v2
+; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v10, v2, 0
 ; SDAG-NEXT:    v_mov_b32_e32 v2, v1
-; SDAG-NEXT:    v_mul_lo_u32 v6, v11, v6
-; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v7, v11, v[2:3]
-; SDAG-NEXT:    v_mul_lo_u32 v10, v10, v12
-; SDAG-NEXT:    v_add3_u32 v5, v5, v6, v13
+; SDAG-NEXT:    v_mul_lo_u32 v6, v10, v6
+; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v7, v10, v[2:3]
+; SDAG-NEXT:    v_mul_lo_u32 v10, v9, v7
+; SDAG-NEXT:    v_add3_u32 v5, v5, v6, v12
 ; SDAG-NEXT:    v_mov_b32_e32 v6, v2
 ; SDAG-NEXT:    v_mov_b32_e32 v2, v3
-; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v12, v8, v[1:2]
-; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v9, v12, v[4:5]
+; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v11, v8, v[1:2]
+; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v9, v11, v[4:5]
 ; SDAG-NEXT:    v_add_co_u32_e64 v5, s[4:5], v6, v2
 ; SDAG-NEXT:    v_addc_co_u32_e64 v6, s[4:5], 0, 0, s[4:5]
-; SDAG-NEXT:    v_mul_lo_u32 v9, v9, v7
+; SDAG-NEXT:    v_mul_lo_u32 v9, v9, v11
 ; SDAG-NEXT:    v_mad_u64_u32 v[5:6], s[4:5], v7, v8, v[5:6]
-; SDAG-NEXT:    ; implicit-def: $vgpr11
 ; SDAG-NEXT:    ; implicit-def: $vgpr8
-; SDAG-NEXT:    v_add3_u32 v4, v10, v4, v9
+; SDAG-NEXT:    v_add3_u32 v4, v9, v4, v10
 ; SDAG-NEXT:    v_add_co_u32_e64 v2, s[4:5], v5, v3
 ; SDAG-NEXT:    v_addc_co_u32_e64 v3, s[4:5], v6, v4, s[4:5]
 ; SDAG-NEXT:    ; implicit-def: $vgpr6_vgpr7
 ; SDAG-NEXT:    ; implicit-def: $vgpr4_vgpr5
-; SDAG-NEXT:    ; implicit-def: $vgpr9
 ; SDAG-NEXT:    ; implicit-def: $vgpr10
+; SDAG-NEXT:    ; implicit-def: $vgpr9
 ; SDAG-NEXT:  .LBB1_4: ; %Flow
 ; SDAG-NEXT:    s_andn2_saveexec_b64 s[12:13], s[12:13]
 ; SDAG-NEXT:    s_cbranch_execz .LBB1_6
@@ -474,9 +470,9 @@ define i128 @fptoui_f64_to_i128(double %x) {
 ; SDAG-NEXT:    v_cndmask_b32_e64 v1, 0, v1, s[4:5]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v6, v0, v4, s[6:7]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v5, v1, v5, s[6:7]
-; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v6, v11, 0
+; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v6, v10, 0
 ; SDAG-NEXT:    v_mov_b32_e32 v2, 0
-; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v5, v11, v[1:2]
+; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v5, v10, v[1:2]
 ; SDAG-NEXT:    v_mov_b32_e32 v7, v4
 ; SDAG-NEXT:    v_mov_b32_e32 v4, v2
 ; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v6, v8, v[3:4]
@@ -484,7 +480,7 @@ define i128 @fptoui_f64_to_i128(double %x) {
 ; SDAG-NEXT:    v_addc_co_u32_e64 v3, s[4:5], 0, 0, s[4:5]
 ; SDAG-NEXT:    v_mad_u64_u32 v[2:3], s[4:5], v5, v8, v[2:3]
 ; SDAG-NEXT:    v_mad_u64_u32 v[2:3], s[4:5], v9, v6, v[2:3]
-; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v10, v6, v[3:4]
+; SDAG-NEXT:    v_mad_u64_u32 v[3:4], s[4:5], v9, v6, v[3:4]
 ; SDAG-NEXT:    v_mad_i32_i24 v3, v9, v5, v3
 ; SDAG-NEXT:  .LBB1_6: ; %Flow1
 ; SDAG-NEXT:    s_or_b64 exec, exec, s[12:13]
@@ -780,7 +776,6 @@ define i128 @fptosi_f32_to_i128(float %x) {
 ; SDAG-NEXT:  ; %bb.2: ; %fp-to-i-if-end9
 ; SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, vcc
 ; SDAG-NEXT:    v_add_co_u32_e64 v9, s[4:5], -1, v0
-; SDAG-NEXT:    v_addc_co_u32_e64 v11, s[4:5], 0, -1, s[4:5]
 ; SDAG-NEXT:    s_mov_b64 s[4:5], 0x95
 ; SDAG-NEXT:    v_and_b32_e32 v0, 0x7fffff, v4
 ; SDAG-NEXT:    v_cmp_lt_u64_e64 s[4:5], s[4:5], v[5:6]
@@ -806,24 +801,24 @@ define i128 @fptosi_f32_to_i128(float %x) {
 ; SDAG-NEXT:    v_cndmask_b32_e64 v2, v2, v0, s[4:5]
 ; SDAG-NEXT:    v_lshlrev_b64 v[0:1], v4, v[6:7]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v2, 0, v2, s[6:7]
-; SDAG-NEXT:    v_cndmask_b32_e64 v13, 0, v0, s[4:5]
-; SDAG-NEXT:    v_cndmask_b32_e64 v12, 0, v1, s[4:5]
-; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v13, v10, 0
-; SDAG-NEXT:    v_mul_lo_u32 v14, v8, v2
-; SDAG-NEXT:    v_mul_lo_u32 v15, v10, v3
+; SDAG-NEXT:    v_cndmask_b32_e64 v12, 0, v0, s[4:5]
+; SDAG-NEXT:    v_cndmask_b32_e64 v11, 0, v1, s[4:5]
+; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v12, v10, 0
+; SDAG-NEXT:    v_mul_lo_u32 v13, v8, v2
+; SDAG-NEXT:    v_mul_lo_u32 v14, v10, v3
 ; SDAG-NEXT:    v_mov_b32_e32 v6, v1
-; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v12, v10, v[6:7]
+; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v11, v10, v[6:7]
 ; SDAG-NEXT:    v_mad_u64_u32 v[2:3], s[4:5], v10, v2, 0
 ; SDAG-NEXT:    v_mov_b32_e32 v6, v5
 ; SDAG-NEXT:    v_mov_b32_e32 v5, v7
-; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v13, v8, v[4:5]
-; SDAG-NEXT:    v_add3_u32 v3, v3, v15, v14
-; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v9, v13, v[2:3]
+; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v12, v8, v[4:5]
+; SDAG-NEXT:    v_add3_u32 v3, v3, v14, v13
+; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v9, v12, v[2:3]
 ; SDAG-NEXT:    v_add_co_u32_e64 v5, s[4:5], v6, v5
 ; SDAG-NEXT:    v_addc_co_u32_e64 v6, s[4:5], 0, 0, s[4:5]
-; SDAG-NEXT:    v_mul_lo_u32 v3, v9, v12
-; SDAG-NEXT:    v_mul_lo_u32 v7, v11, v13
-; SDAG-NEXT:    v_mad_u64_u32 v[5:6], s[4:5], v12, v8, v[5:6]
+; SDAG-NEXT:    v_mul_lo_u32 v3, v9, v11
+; SDAG-NEXT:    v_mul_lo_u32 v7, v9, v12
+; SDAG-NEXT:    v_mad_u64_u32 v[5:6], s[4:5], v11, v8, v[5:6]
 ; SDAG-NEXT:    ; implicit-def: $vgpr10
 ; SDAG-NEXT:    ; implicit-def: $vgpr8
 ; SDAG-NEXT:    ; implicit-def: $vgpr9
@@ -1138,7 +1133,6 @@ define i128 @fptoui_f32_to_i128(float %x) {
 ; SDAG-NEXT:  ; %bb.2: ; %fp-to-i-if-end9
 ; SDAG-NEXT:    v_cndmask_b32_e64 v0, 0, 1, vcc
 ; SDAG-NEXT:    v_add_co_u32_e64 v9, s[4:5], -1, v0
-; SDAG-NEXT:    v_addc_co_u32_e64 v11, s[4:5], 0, -1, s[4:5]
 ; SDAG-NEXT:    s_mov_b64 s[4:5], 0x95
 ; SDAG-NEXT:    v_and_b32_e32 v0, 0x7fffff, v4
 ; SDAG-NEXT:    v_cmp_lt_u64_e64 s[4:5], s[4:5], v[5:6]
@@ -1164,24 +1158,24 @@ define i128 @fptoui_f32_to_i128(float %x) {
 ; SDAG-NEXT:    v_cndmask_b32_e64 v2, v2, v0, s[4:5]
 ; SDAG-NEXT:    v_lshlrev_b64 v[0:1], v4, v[6:7]
 ; SDAG-NEXT:    v_cndmask_b32_e64 v2, 0, v2, s[6:7]
-; SDAG-NEXT:    v_cndmask_b32_e64 v13, 0, v0, s[4:5]
-; SDAG-NEXT:    v_cndmask_b32_e64 v12, 0, v1, s[4:5]
-; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v13, v10, 0
-; SDAG-NEXT:    v_mul_lo_u32 v14, v8, v2
-; SDAG-NEXT:    v_mul_lo_u32 v15, v10, v3
+; SDAG-NEXT:    v_cndmask_b32_e64 v12, 0, v0, s[4:5]
+; SDAG-NEXT:    v_cndmask_b32_e64 v11, 0, v1, s[4:5]
+; SDAG-NEXT:    v_mad_u64_u32 v[0:1], s[4:5], v12, v10, 0
+; SDAG-NEXT:    v_mul_lo_u32 v13, v8, v2
+; SDAG-NEXT:    v_mul_lo_u32 v14, v10, v3
 ; SDAG-NEXT:    v_mov_b32_e32 v6, v1
-; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v12, v10, v[6:7]
+; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v11, v10, v[6:7]
 ; SDAG-NEXT:    v_mad_u64_u32 v[2:3], s[4:5], v10, v2, 0
 ; SDAG-NEXT:    v_mov_b32_e32 v6, v5
 ; SDAG-NEXT:    v_mov_b32_e32 v5, v7
-; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v13, v8, v[4:5]
-; SDAG-NEXT:    v_add3_u32 v3, v3, v15, v14
-; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v9, v13, v[2:3]
+; SDAG-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v12, v8, v[4:5]
+; SDAG-NEXT:    v_add3_u32 v3, v3, v14, v13
+; SDAG-NEXT:    v_mad_u64_u32 v[1:2], s[4:5], v9, v12, v[2:3]
 ; SDAG-NEXT:    v_add_co_u32_e64 v5, s[4:5], v6, v5
 ; SDAG-NEXT:    v_addc_co_u32_e64 v6, s[4:5], 0, 0, s[4:5]
-; SDAG-NEXT:    v_mul_lo_u32 v3, v9, v12
-; SDAG-NEXT:    v_mul_lo_u32 v7, v11, v13
-; SDAG-NEXT:    v_mad_u64_u32 v[5:6], s[4:5], v12, v8, v[5:6]
+; SDAG-NEXT:    v_mul_lo_u32 v3, v9, v11
+; SDAG-NEXT:    v_mul_lo_u32 v7, v9, v12
+; SDAG-NEXT:    v_mad_u64_u32 v[5:6], s[4:5], v11, v8, v[5:6]
 ; SDAG-NEXT:    ; implic...
[truncated]

@llvmbot
Member

llvmbot commented Oct 31, 2024

@llvm/pr-subscribers-backend-x86

@RKSimon RKSimon requested a review from topperc October 31, 2024 10:55
Contributor

@jayfoad jayfoad left a comment

Makes sense to me. One naming nit inline.

@RKSimon RKSimon force-pushed the dag-srl-multiuse-demandedbits branch from 80415a4 to e9e1696 on October 31, 2024 13:36
… demanding known sign bits

Check whether we are only demanding (shifted) sign bits from an SRL node that are also sign bits in the source node.

We can't demand any of the upper zero bits that the SRL will shift in (up to the maximum shift amount), and every bit from the lowest demanded bit upwards must already be a sign bit in the source.
@RKSimon RKSimon force-pushed the dag-srl-multiuse-demandedbits branch from e9e1696 to c2688e5 on October 31, 2024 15:12
@RKSimon RKSimon merged commit 9fb4bc5 into llvm:main Oct 31, 2024
6 of 7 checks passed
@RKSimon RKSimon deleted the dag-srl-multiuse-demandedbits branch October 31, 2024 16:40
smallp-o-p pushed a commit to smallp-o-p/llvm-project that referenced this pull request Nov 3, 2024
… demanding known sign bits (llvm#114389)

Check whether we are only demanding (shifted) sign bits from an SRL node that are also sign bits in the source node.

We can't demand any of the upper zero bits that the SRL will shift in (up to the maximum shift amount), and every bit from the lowest demanded bit upwards must already be a sign bit in the source.
NoumanAmir657 pushed a commit to NoumanAmir657/llvm-project that referenced this pull request Nov 4, 2024
… demanding known sign bits (llvm#114389)

Check whether we are only demanding (shifted) sign bits from an SRL node that are also sign bits in the source node.

We can't demand any of the upper zero bits that the SRL will shift in (up to the maximum shift amount), and every bit from the lowest demanded bit upwards must already be a sign bit in the source.
RKSimon added a commit to RKSimon/llvm-project that referenced this pull request Nov 4, 2024
…known sign bits

Check whether we are only demanding (shifted) sign bits from an SRL node that are also sign bits in the source node.

We can't demand any of the upper zero bits that the SRL will shift in (up to the maximum shift amount), and every bit from the lowest demanded bit upwards must already be a sign bit in the source.

Same fold as llvm#114389, which added this for SimplifyMultipleUseDemandedBits
RKSimon added a commit that referenced this pull request Nov 4, 2024
…known sign bits (#114805)

Check whether we are only demanding (shifted) sign bits from an SRL node that are also sign bits in the source node.

We can't demand any of the upper zero bits that the SRL will shift in (up to the maximum shift amount), and every bit from the lowest demanded bit upwards must already be a sign bit in the source.

Same fold as #114389, which added this for SimplifyMultipleUseDemandedBits
PhilippRados pushed a commit to PhilippRados/llvm-project that referenced this pull request Nov 6, 2024
…known sign bits (llvm#114805)

Check whether we are only demanding (shifted) sign bits from an SRL node that are also sign bits in the source node.

We can't demand any of the upper zero bits that the SRL will shift in (up to the maximum shift amount), and every bit from the lowest demanded bit upwards must already be a sign bit in the source.

Same fold as llvm#114389, which added this for SimplifyMultipleUseDemandedBits