[RISCV][TTI] Model partial reduce of ext for zvqdotq #146788
Conversation
This is the RISCV follow-up to f575b18, leveraging the recently added infrastructure.
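For readers skimming the patch below, here is a minimal hand-written sketch of the two forms the updated cost hook accepts: the plain dot-product idiom, and a partial reduction of a bare extend with no multiply. This is an illustration only, with made-up function names and vector widths; it is not taken from the patch or its tests.

declare <vscale x 1 x i32> @llvm.experimental.vector.partial.reduce.add.nxv1i32.nxv4i32(<vscale x 1 x i32>, <vscale x 4 x i32>)

; Dot-product idiom: a mul of two matching extends feeding the partial reduce.
define <vscale x 1 x i32> @dot(<vscale x 1 x i32> %acc, <vscale x 4 x i8> %a, <vscale x 4 x i8> %b) {
  %a.ext = zext <vscale x 4 x i8> %a to <vscale x 4 x i32>
  %b.ext = zext <vscale x 4 x i8> %b to <vscale x 4 x i32>
  %mul = mul <vscale x 4 x i32> %a.ext, %b.ext
  %r = call <vscale x 1 x i32> @llvm.experimental.vector.partial.reduce.add.nxv1i32.nxv4i32(<vscale x 1 x i32> %acc, <vscale x 4 x i32> %mul)
  ret <vscale x 1 x i32> %r
}

; Newly costed case: a partial reduction of a bare extend (no multiply).
define <vscale x 1 x i32> @ext_only(<vscale x 1 x i32> %acc, <vscale x 4 x i8> %a) {
  %a.ext = zext <vscale x 4 x i8> %a to <vscale x 4 x i32>
  %r = call <vscale x 1 x i32> @llvm.experimental.vector.partial.reduce.add.nxv1i32.nxv4i32(<vscale x 1 x i32> %acc, <vscale x 4 x i32> %a.ext)
  ret <vscale x 1 x i32> %r
}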
@llvm/pr-subscribers-llvm-transforms @llvm/pr-subscribers-backend-risc-v

Author: Philip Reames (preames)

Changes

This is the RISCV follow-up to f575b18, leveraging the recently added infrastructure.

Patch is 42.54 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/146788.diff

2 Files Affected:
- (modified) llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
- (added) llvm/test/Transforms/LoopVectorize/RISCV/partial-reduce.ll
diff --git a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
index 67a51c12b508e..f84823dca3ad3 100644
--- a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
+++ b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp
@@ -303,16 +303,29 @@ InstructionCost RISCVTTIImpl::getPartialReductionCost(
// zve32x is broken for partial_reduce_umla, but let's make sure we
// don't generate them.
if (!ST->hasStdExtZvqdotq() || ST->getELen() < 64 ||
- Opcode != Instruction::Add || !BinOp || *BinOp != Instruction::Mul ||
- InputTypeA != InputTypeB || !InputTypeA->isIntegerTy(8) ||
+ Opcode != Instruction::Add || !InputTypeA->isIntegerTy(8) ||
!AccumType->isIntegerTy(32) || !VF.isKnownMultipleOf(4))
return InstructionCost::getInvalid();
+ // We support both the plain dot product idiom, and the use of dot product
+ // to compute a reduction of an extended value.
+ if (BinOp && (*BinOp != Instruction::Mul || InputTypeA != InputTypeB))
+ return InstructionCost::getInvalid();
+
+ InstructionCost IntMatCost = 0;
+ if (!BinOp) {
+ // Cost to produce one vmv.v.i -- since the constant is shared across any
+ // unrolled copies, don't need to scale by LT.first.
+ Type *Tp = VectorType::get(InputTypeA, VF);
+ std::pair<InstructionCost, MVT> LT = getTypeLegalizationCost(Tp);
+ IntMatCost = getRISCVInstructionCost(RISCV::VMV_V_I, LT.second, CostKind);
+ }
+
Type *Tp = VectorType::get(AccumType, VF.divideCoefficientBy(4));
std::pair<InstructionCost, MVT> LT = getTypeLegalizationCost(Tp);
// Note: Assuming all vqdot* variants are equal cost
- return LT.first *
- getRISCVInstructionCost(RISCV::VQDOT_VV, LT.second, CostKind);
+ return IntMatCost + LT.first * getRISCVInstructionCost(RISCV::VQDOT_VV,
+ LT.second, CostKind);
}
bool RISCVTTIImpl::shouldExpandReduction(const IntrinsicInst *II) const {
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/partial-reduce.ll b/llvm/test/Transforms/LoopVectorize/RISCV/partial-reduce.ll
new file mode 100644
index 0000000000000..83475796abc6c
--- /dev/null
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/partial-reduce.ll
@@ -0,0 +1,663 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-globals none --filter-out-after "^scalar.ph:" --version 4
+; RUN: opt -passes=loop-vectorize -mattr=+v -S < %s | FileCheck %s --check-prefixes=CHECK,V
+; RUN: opt -passes=loop-vectorize -mattr=+v,+experimental-zvqdotq -S < %s | FileCheck %s --check-prefixes=CHECK,ZVQDOTQ
+; RUN: opt -passes=loop-vectorize -mattr=+v -scalable-vectorization=off -S < %s | FileCheck %s --check-prefixes=FIXED,FIXED-V
+; RUN: opt -passes=loop-vectorize -mattr=+v,+experimental-zvqdotq -scalable-vectorization=off -S < %s | FileCheck %s --check-prefixes=FIXED,FIXED-ZVQDOTQ
+
+target triple = "riscv64-none-unknown-elf"
+
+; == Partial reductions with add of an extend
+
+define i32 @zext_add_reduc_i8_i32(ptr %a) {
+; V-LABEL: define i32 @zext_add_reduc_i8_i32(
+; V-SAME: ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
+; V-NEXT: entry:
+; V-NEXT: [[TMP0:%.*]] = call i64 @llvm.vscale.i64()
+; V-NEXT: [[TMP1:%.*]] = mul nuw i64 [[TMP0]], 4
+; V-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 1025, [[TMP1]]
+; V-NEXT: br i1 [[MIN_ITERS_CHECK]], label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]]
+; V: vector.ph:
+; V-NEXT: [[TMP2:%.*]] = call i64 @llvm.vscale.i64()
+; V-NEXT: [[TMP3:%.*]] = mul nuw i64 [[TMP2]], 4
+; V-NEXT: [[N_MOD_VF:%.*]] = urem i64 1025, [[TMP3]]
+; V-NEXT: [[N_VEC:%.*]] = sub i64 1025, [[N_MOD_VF]]
+; V-NEXT: [[TMP4:%.*]] = call i64 @llvm.vscale.i64()
+; V-NEXT: [[TMP5:%.*]] = mul nuw i64 [[TMP4]], 4
+; V-NEXT: br label [[VECTOR_BODY:%.*]]
+; V: vector.body:
+; V-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
+; V-NEXT: [[VEC_PHI:%.*]] = phi <vscale x 4 x i32> [ zeroinitializer, [[VECTOR_PH]] ], [ [[TMP9:%.*]], [[VECTOR_BODY]] ]
+; V-NEXT: [[TMP6:%.*]] = getelementptr i8, ptr [[A]], i64 [[INDEX]]
+; V-NEXT: [[TMP7:%.*]] = getelementptr i8, ptr [[TMP6]], i32 0
+; V-NEXT: [[WIDE_LOAD:%.*]] = load <vscale x 4 x i8>, ptr [[TMP7]], align 1
+; V-NEXT: [[TMP8:%.*]] = zext <vscale x 4 x i8> [[WIDE_LOAD]] to <vscale x 4 x i32>
+; V-NEXT: [[TMP9]] = add <vscale x 4 x i32> [[TMP8]], [[VEC_PHI]]
+; V-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], [[TMP5]]
+; V-NEXT: [[TMP10:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
+; V-NEXT: br i1 [[TMP10]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]]
+; V: middle.block:
+; V-NEXT: [[TMP11:%.*]] = call i32 @llvm.vector.reduce.add.nxv4i32(<vscale x 4 x i32> [[TMP9]])
+; V-NEXT: [[CMP_N:%.*]] = icmp eq i64 1025, [[N_VEC]]
+; V-NEXT: br i1 [[CMP_N]], label [[FOR_EXIT:%.*]], label [[SCALAR_PH]]
+; V: scalar.ph:
+;
+; ZVQDOTQ-LABEL: define i32 @zext_add_reduc_i8_i32(
+; ZVQDOTQ-SAME: ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
+; ZVQDOTQ-NEXT: entry:
+; ZVQDOTQ-NEXT: [[TMP0:%.*]] = call i64 @llvm.vscale.i64()
+; ZVQDOTQ-NEXT: [[TMP1:%.*]] = mul nuw i64 [[TMP0]], 4
+; ZVQDOTQ-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 1025, [[TMP1]]
+; ZVQDOTQ-NEXT: br i1 [[MIN_ITERS_CHECK]], label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]]
+; ZVQDOTQ: vector.ph:
+; ZVQDOTQ-NEXT: [[TMP2:%.*]] = call i64 @llvm.vscale.i64()
+; ZVQDOTQ-NEXT: [[TMP3:%.*]] = mul nuw i64 [[TMP2]], 4
+; ZVQDOTQ-NEXT: [[N_MOD_VF:%.*]] = urem i64 1025, [[TMP3]]
+; ZVQDOTQ-NEXT: [[N_VEC:%.*]] = sub i64 1025, [[N_MOD_VF]]
+; ZVQDOTQ-NEXT: [[TMP4:%.*]] = call i64 @llvm.vscale.i64()
+; ZVQDOTQ-NEXT: [[TMP5:%.*]] = mul nuw i64 [[TMP4]], 4
+; ZVQDOTQ-NEXT: br label [[VECTOR_BODY:%.*]]
+; ZVQDOTQ: vector.body:
+; ZVQDOTQ-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
+; ZVQDOTQ-NEXT: [[VEC_PHI:%.*]] = phi <vscale x 1 x i32> [ zeroinitializer, [[VECTOR_PH]] ], [ [[PARTIAL_REDUCE:%.*]], [[VECTOR_BODY]] ]
+; ZVQDOTQ-NEXT: [[TMP6:%.*]] = getelementptr i8, ptr [[A]], i64 [[INDEX]]
+; ZVQDOTQ-NEXT: [[TMP7:%.*]] = getelementptr i8, ptr [[TMP6]], i32 0
+; ZVQDOTQ-NEXT: [[WIDE_LOAD:%.*]] = load <vscale x 4 x i8>, ptr [[TMP7]], align 1
+; ZVQDOTQ-NEXT: [[TMP8:%.*]] = zext <vscale x 4 x i8> [[WIDE_LOAD]] to <vscale x 4 x i32>
+; ZVQDOTQ-NEXT: [[PARTIAL_REDUCE]] = call <vscale x 1 x i32> @llvm.experimental.vector.partial.reduce.add.nxv1i32.nxv4i32(<vscale x 1 x i32> [[VEC_PHI]], <vscale x 4 x i32> [[TMP8]])
+; ZVQDOTQ-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], [[TMP5]]
+; ZVQDOTQ-NEXT: [[TMP9:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
+; ZVQDOTQ-NEXT: br i1 [[TMP9]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]]
+; ZVQDOTQ: middle.block:
+; ZVQDOTQ-NEXT: [[TMP10:%.*]] = call i32 @llvm.vector.reduce.add.nxv1i32(<vscale x 1 x i32> [[PARTIAL_REDUCE]])
+; ZVQDOTQ-NEXT: [[CMP_N:%.*]] = icmp eq i64 1025, [[N_VEC]]
+; ZVQDOTQ-NEXT: br i1 [[CMP_N]], label [[FOR_EXIT:%.*]], label [[SCALAR_PH]]
+; ZVQDOTQ: scalar.ph:
+;
+; FIXED-V-LABEL: define i32 @zext_add_reduc_i8_i32(
+; FIXED-V-SAME: ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
+; FIXED-V-NEXT: entry:
+; FIXED-V-NEXT: br i1 false, label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]]
+; FIXED-V: vector.ph:
+; FIXED-V-NEXT: br label [[VECTOR_BODY:%.*]]
+; FIXED-V: vector.body:
+; FIXED-V-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
+; FIXED-V-NEXT: [[VEC_PHI:%.*]] = phi <8 x i32> [ zeroinitializer, [[VECTOR_PH]] ], [ [[TMP5:%.*]], [[VECTOR_BODY]] ]
+; FIXED-V-NEXT: [[VEC_PHI1:%.*]] = phi <8 x i32> [ zeroinitializer, [[VECTOR_PH]] ], [ [[TMP6:%.*]], [[VECTOR_BODY]] ]
+; FIXED-V-NEXT: [[TMP0:%.*]] = getelementptr i8, ptr [[A]], i64 [[INDEX]]
+; FIXED-V-NEXT: [[TMP1:%.*]] = getelementptr i8, ptr [[TMP0]], i32 0
+; FIXED-V-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[TMP0]], i32 8
+; FIXED-V-NEXT: [[WIDE_LOAD:%.*]] = load <8 x i8>, ptr [[TMP1]], align 1
+; FIXED-V-NEXT: [[WIDE_LOAD2:%.*]] = load <8 x i8>, ptr [[TMP2]], align 1
+; FIXED-V-NEXT: [[TMP3:%.*]] = zext <8 x i8> [[WIDE_LOAD]] to <8 x i32>
+; FIXED-V-NEXT: [[TMP4:%.*]] = zext <8 x i8> [[WIDE_LOAD2]] to <8 x i32>
+; FIXED-V-NEXT: [[TMP5]] = add <8 x i32> [[TMP3]], [[VEC_PHI]]
+; FIXED-V-NEXT: [[TMP6]] = add <8 x i32> [[TMP4]], [[VEC_PHI1]]
+; FIXED-V-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 16
+; FIXED-V-NEXT: [[TMP7:%.*]] = icmp eq i64 [[INDEX_NEXT]], 1024
+; FIXED-V-NEXT: br i1 [[TMP7]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]]
+; FIXED-V: middle.block:
+; FIXED-V-NEXT: [[BIN_RDX:%.*]] = add <8 x i32> [[TMP6]], [[TMP5]]
+; FIXED-V-NEXT: [[TMP8:%.*]] = call i32 @llvm.vector.reduce.add.v8i32(<8 x i32> [[BIN_RDX]])
+; FIXED-V-NEXT: br i1 false, label [[FOR_EXIT:%.*]], label [[SCALAR_PH]]
+; FIXED-V: scalar.ph:
+;
+; FIXED-ZVQDOTQ-LABEL: define i32 @zext_add_reduc_i8_i32(
+; FIXED-ZVQDOTQ-SAME: ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
+; FIXED-ZVQDOTQ-NEXT: entry:
+; FIXED-ZVQDOTQ-NEXT: br i1 false, label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]]
+; FIXED-ZVQDOTQ: vector.ph:
+; FIXED-ZVQDOTQ-NEXT: br label [[VECTOR_BODY:%.*]]
+; FIXED-ZVQDOTQ: vector.body:
+; FIXED-ZVQDOTQ-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
+; FIXED-ZVQDOTQ-NEXT: [[VEC_PHI:%.*]] = phi <2 x i32> [ zeroinitializer, [[VECTOR_PH]] ], [ [[PARTIAL_REDUCE:%.*]], [[VECTOR_BODY]] ]
+; FIXED-ZVQDOTQ-NEXT: [[VEC_PHI1:%.*]] = phi <2 x i32> [ zeroinitializer, [[VECTOR_PH]] ], [ [[PARTIAL_REDUCE3:%.*]], [[VECTOR_BODY]] ]
+; FIXED-ZVQDOTQ-NEXT: [[TMP0:%.*]] = getelementptr i8, ptr [[A]], i64 [[INDEX]]
+; FIXED-ZVQDOTQ-NEXT: [[TMP1:%.*]] = getelementptr i8, ptr [[TMP0]], i32 0
+; FIXED-ZVQDOTQ-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[TMP0]], i32 8
+; FIXED-ZVQDOTQ-NEXT: [[WIDE_LOAD:%.*]] = load <8 x i8>, ptr [[TMP1]], align 1
+; FIXED-ZVQDOTQ-NEXT: [[WIDE_LOAD2:%.*]] = load <8 x i8>, ptr [[TMP2]], align 1
+; FIXED-ZVQDOTQ-NEXT: [[TMP3:%.*]] = zext <8 x i8> [[WIDE_LOAD]] to <8 x i32>
+; FIXED-ZVQDOTQ-NEXT: [[TMP4:%.*]] = zext <8 x i8> [[WIDE_LOAD2]] to <8 x i32>
+; FIXED-ZVQDOTQ-NEXT: [[PARTIAL_REDUCE]] = call <2 x i32> @llvm.experimental.vector.partial.reduce.add.v2i32.v8i32(<2 x i32> [[VEC_PHI]], <8 x i32> [[TMP3]])
+; FIXED-ZVQDOTQ-NEXT: [[PARTIAL_REDUCE3]] = call <2 x i32> @llvm.experimental.vector.partial.reduce.add.v2i32.v8i32(<2 x i32> [[VEC_PHI1]], <8 x i32> [[TMP4]])
+; FIXED-ZVQDOTQ-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 16
+; FIXED-ZVQDOTQ-NEXT: [[TMP5:%.*]] = icmp eq i64 [[INDEX_NEXT]], 1024
+; FIXED-ZVQDOTQ-NEXT: br i1 [[TMP5]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]]
+; FIXED-ZVQDOTQ: middle.block:
+; FIXED-ZVQDOTQ-NEXT: [[BIN_RDX:%.*]] = add <2 x i32> [[PARTIAL_REDUCE3]], [[PARTIAL_REDUCE]]
+; FIXED-ZVQDOTQ-NEXT: [[TMP6:%.*]] = call i32 @llvm.vector.reduce.add.v2i32(<2 x i32> [[BIN_RDX]])
+; FIXED-ZVQDOTQ-NEXT: br i1 false, label [[FOR_EXIT:%.*]], label [[SCALAR_PH]]
+; FIXED-ZVQDOTQ: scalar.ph:
+;
+entry:
+ br label %for.body
+
+for.body: ; preds = %for.body, %entry
+ %iv = phi i64 [ 0, %entry ], [ %iv.next, %for.body ]
+ %accum = phi i32 [ 0, %entry ], [ %add, %for.body ]
+ %gep.a = getelementptr i8, ptr %a, i64 %iv
+ %load.a = load i8, ptr %gep.a, align 1
+ %ext.a = zext i8 %load.a to i32
+ %add = add i32 %ext.a, %accum
+ %iv.next = add i64 %iv, 1
+ %exitcond.not = icmp eq i64 %iv.next, 1025
+ br i1 %exitcond.not, label %for.exit, label %for.body
+
+for.exit: ; preds = %for.body
+ ret i32 %add
+}
+
+define i64 @zext_add_reduc_i8_i64(ptr %a) {
+; CHECK-LABEL: define i64 @zext_add_reduc_i8_i64(
+; CHECK-SAME: ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
+; CHECK-NEXT: entry:
+; CHECK-NEXT: [[TMP0:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP1:%.*]] = mul nuw i64 [[TMP0]], 2
+; CHECK-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 1025, [[TMP1]]
+; CHECK-NEXT: br i1 [[MIN_ITERS_CHECK]], label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]]
+; CHECK: vector.ph:
+; CHECK-NEXT: [[TMP2:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP3:%.*]] = mul nuw i64 [[TMP2]], 2
+; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i64 1025, [[TMP3]]
+; CHECK-NEXT: [[N_VEC:%.*]] = sub i64 1025, [[N_MOD_VF]]
+; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP5:%.*]] = mul nuw i64 [[TMP4]], 2
+; CHECK-NEXT: br label [[VECTOR_BODY:%.*]]
+; CHECK: vector.body:
+; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
+; CHECK-NEXT: [[VEC_PHI:%.*]] = phi <vscale x 2 x i64> [ zeroinitializer, [[VECTOR_PH]] ], [ [[TMP9:%.*]], [[VECTOR_BODY]] ]
+; CHECK-NEXT: [[TMP6:%.*]] = getelementptr i8, ptr [[A]], i64 [[INDEX]]
+; CHECK-NEXT: [[TMP7:%.*]] = getelementptr i8, ptr [[TMP6]], i32 0
+; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <vscale x 2 x i8>, ptr [[TMP7]], align 1
+; CHECK-NEXT: [[TMP8:%.*]] = zext <vscale x 2 x i8> [[WIDE_LOAD]] to <vscale x 2 x i64>
+; CHECK-NEXT: [[TMP9]] = add <vscale x 2 x i64> [[TMP8]], [[VEC_PHI]]
+; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], [[TMP5]]
+; CHECK-NEXT: [[TMP10:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
+; CHECK-NEXT: br i1 [[TMP10]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP4:![0-9]+]]
+; CHECK: middle.block:
+; CHECK-NEXT: [[TMP11:%.*]] = call i64 @llvm.vector.reduce.add.nxv2i64(<vscale x 2 x i64> [[TMP9]])
+; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 1025, [[N_VEC]]
+; CHECK-NEXT: br i1 [[CMP_N]], label [[FOR_EXIT:%.*]], label [[SCALAR_PH]]
+; CHECK: scalar.ph:
+;
+; FIXED-LABEL: define i64 @zext_add_reduc_i8_i64(
+; FIXED-SAME: ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
+; FIXED-NEXT: entry:
+; FIXED-NEXT: br i1 false, label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]]
+; FIXED: vector.ph:
+; FIXED-NEXT: br label [[VECTOR_BODY:%.*]]
+; FIXED: vector.body:
+; FIXED-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
+; FIXED-NEXT: [[VEC_PHI:%.*]] = phi <4 x i64> [ zeroinitializer, [[VECTOR_PH]] ], [ [[TMP5:%.*]], [[VECTOR_BODY]] ]
+; FIXED-NEXT: [[VEC_PHI1:%.*]] = phi <4 x i64> [ zeroinitializer, [[VECTOR_PH]] ], [ [[TMP6:%.*]], [[VECTOR_BODY]] ]
+; FIXED-NEXT: [[TMP0:%.*]] = getelementptr i8, ptr [[A]], i64 [[INDEX]]
+; FIXED-NEXT: [[TMP1:%.*]] = getelementptr i8, ptr [[TMP0]], i32 0
+; FIXED-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[TMP0]], i32 4
+; FIXED-NEXT: [[WIDE_LOAD:%.*]] = load <4 x i8>, ptr [[TMP1]], align 1
+; FIXED-NEXT: [[WIDE_LOAD2:%.*]] = load <4 x i8>, ptr [[TMP2]], align 1
+; FIXED-NEXT: [[TMP3:%.*]] = zext <4 x i8> [[WIDE_LOAD]] to <4 x i64>
+; FIXED-NEXT: [[TMP4:%.*]] = zext <4 x i8> [[WIDE_LOAD2]] to <4 x i64>
+; FIXED-NEXT: [[TMP5]] = add <4 x i64> [[TMP3]], [[VEC_PHI]]
+; FIXED-NEXT: [[TMP6]] = add <4 x i64> [[TMP4]], [[VEC_PHI1]]
+; FIXED-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 8
+; FIXED-NEXT: [[TMP7:%.*]] = icmp eq i64 [[INDEX_NEXT]], 1024
+; FIXED-NEXT: br i1 [[TMP7]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP4:![0-9]+]]
+; FIXED: middle.block:
+; FIXED-NEXT: [[BIN_RDX:%.*]] = add <4 x i64> [[TMP6]], [[TMP5]]
+; FIXED-NEXT: [[TMP8:%.*]] = call i64 @llvm.vector.reduce.add.v4i64(<4 x i64> [[BIN_RDX]])
+; FIXED-NEXT: br i1 false, label [[FOR_EXIT:%.*]], label [[SCALAR_PH]]
+; FIXED: scalar.ph:
+;
+entry:
+ br label %for.body
+
+for.body: ; preds = %for.body, %entry
+ %iv = phi i64 [ 0, %entry ], [ %iv.next, %for.body ]
+ %accum = phi i64 [ 0, %entry ], [ %add, %for.body ]
+ %gep.a = getelementptr i8, ptr %a, i64 %iv
+ %load.a = load i8, ptr %gep.a, align 1
+ %ext.a = zext i8 %load.a to i64
+ %add = add i64 %ext.a, %accum
+ %iv.next = add i64 %iv, 1
+ %exitcond.not = icmp eq i64 %iv.next, 1025
+ br i1 %exitcond.not, label %for.exit, label %for.body
+
+for.exit: ; preds = %for.body
+ ret i64 %add
+}
+
+
+define i64 @zext_add_reduc_i16_i64(ptr %a) {
+; CHECK-LABEL: define i64 @zext_add_reduc_i16_i64(
+; CHECK-SAME: ptr [[A:%.*]]) #[[ATTR0]] {
+; CHECK-NEXT: entry:
+; CHECK-NEXT: [[TMP0:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP1:%.*]] = mul nuw i64 [[TMP0]], 2
+; CHECK-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 1025, [[TMP1]]
+; CHECK-NEXT: br i1 [[MIN_ITERS_CHECK]], label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]]
+; CHECK: vector.ph:
+; CHECK-NEXT: [[TMP2:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP3:%.*]] = mul nuw i64 [[TMP2]], 2
+; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i64 1025, [[TMP3]]
+; CHECK-NEXT: [[N_VEC:%.*]] = sub i64 1025, [[N_MOD_VF]]
+; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP5:%.*]] = mul nuw i64 [[TMP4]], 2
+; CHECK-NEXT: br label [[VECTOR_BODY:%.*]]
+; CHECK: vector.body:
+; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
+; CHECK-NEXT: [[VEC_PHI:%.*]] = phi <vscale x 2 x i64> [ zeroinitializer, [[VECTOR_PH]] ], [ [[TMP9:%.*]], [[VECTOR_BODY]] ]
+; CHECK-NEXT: [[TMP6:%.*]] = getelementptr i16, ptr [[A]], i64 [[INDEX]]
+; CHECK-NEXT: [[TMP7:%.*]] = getelementptr i16, ptr [[TMP6]], i32 0
+; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <vscale x 2 x i16>, ptr [[TMP7]], align 2
+; CHECK-NEXT: [[TMP8:%.*]] = zext <vscale x 2 x i16> [[WIDE_LOAD]] to <vscale x 2 x i64>
+; CHECK-NEXT: [[TMP9]] = add <vscale x 2 x i64> [[TMP8]], [[VEC_PHI]]
+; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], [[TMP5]]
+; CHECK-NEXT: [[TMP10:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
+; CHECK-NEXT: br i1 [[TMP10]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP6:![0-9]+]]
+; CHECK: middle.block:
+; CHECK-NEXT: [[TMP11:%.*]] = call i64 @llvm.vector.reduce.add.nxv2i64(<vscale x 2 x i64> [[TMP9]])
+; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 1025, [[N_VEC]]
+; CHECK-NEXT: br i1 [[CMP_N]], label [[FOR_EXIT:%.*]], label [[SCALAR_PH]]
+; CHECK: scalar.ph:
+;
+; FIXED-LABEL: define i64 @zext_add_reduc_i16_i64(
+; FIXED-SAME: ptr [[A:%.*]]) #[[ATTR0]] {
+; FIXED-NEXT: entry:
+; FIXED-NEXT: br i1 false, label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]]
+; FIXED: vector.ph:
+; FIXED-NEXT: br label [[VECTOR_BODY:%.*]]
+; FIXED: vector.body:
+; FIXED-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
+; FIXED-NEXT: [[VEC_PHI:%.*]] = phi <4 x i64> [ zeroinitializer, [[VECTOR_PH]] ], [ [[TMP5:%.*]], [[VECTOR_BODY]] ]
+; FIXED-NEXT: [[VEC_PHI1:%.*]] = phi <4 x i64> [ zeroinitializer, [[VECTOR_PH]] ], [ [[TMP6:%.*]], [[VECTOR_BODY]] ]
+; FIXED-NEXT: [[TMP0:%.*]] = getelementptr i16, ptr [[A]], i64 [[INDEX]]
+; FIXED-NEXT: [[TMP1:%.*]] = getelementptr i16, ptr [[TMP0]], i32 0
+; FIXED-NEXT: [[TMP2:%.*]] = getelementptr i16, ptr [[TMP0]], i32 4
+; FIXED-NEXT: [[WIDE_LOAD:%.*]] = load <4 x i16>, ptr [[TMP1]], align 2
+; FIXED-NEXT: [[WIDE_LOAD2:%.*]] = load <4 x i16>, ptr [[TMP2]], align 2
+; FIXED-NEXT: [[TMP3:%.*]] = zext <4 x i16> [[WIDE_LOAD]] to <4 x i64>
+; FIXED-NEXT: [[TMP4:%.*]] = zext <4 x i16> [[WIDE_LOAD2]] to <4 x i64>
+; FIXED-NEXT: [[TMP5]] = add <4 x i64> [[TMP3]], [[VEC_PHI]]
+; FIXED-NEXT: [[TMP6]] = add <4 x i64> [[TMP4]], [[VEC_PHI1]]
+; FIXED-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 8
+; FIXED-NEXT: [[TMP7:%.*]] = icmp eq i64 [[INDEX_NEXT]], 1024
+; FIXED-NEXT: br i1 [[TMP7]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP6:![0-9]+]]
+; FIXED: middle.block:
+; FIXED-NEXT: [[BIN_RDX:%.*]] = add <4 x i64> [[TMP6]], [[TMP5]]
+; FIXED-NEXT: [[TMP8:%.*]] = call i64 @llvm.vector.reduce.add.v4i64(<4 x i64> [[BIN_RDX]])
+; FIXED-NEXT: br i1 false, label [[FOR_EXIT:%.*]], label [[SCALAR_PH]]
+; FIXED: scalar.ph:...
[truncated]
// unrolled copies, don't need to scale by LT.first.
Type *Tp = VectorType::get(InputTypeA, VF);
std::pair<InstructionCost, MVT> LT = getTypeLegalizationCost(Tp);
IntMatCost = getRISCVInstructionCost(RISCV::VMV_V_I, LT.second, CostKind);
I'm not really familiar with RISCV lowering, but wouldn't the materialisation in practice be free, since only the loop vectoriser uses getPartialReductionCost and for vectorised loops the constant should be hoisted out?
The current lowering will expand the constant using an SEW=e8 vmv.v.x, and that can't be folded into the .vx form of the instruction. That materialization will probably be hoisted out for loops with low register pressure, but won't for loops with high register pressure. (Or more accurately, it might be sunk back in.) I went with the more conservative costing for the moment; I think we can revisit this later if we find the lower cost would actually influence the profitable choice. (At least so far, it doesn't seem to.)
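To make that rationale concrete, here is a rough sketch of the multiply-by-ones form the extend-only reduction is effectively treated as (my own illustration, not code from this patch; names and widths are made up). The all-ones operand is what the VMV_V_I term in the new costing accounts for.

declare <vscale x 1 x i32> @llvm.experimental.vector.partial.reduce.add.nxv1i32.nxv4i32(<vscale x 1 x i32>, <vscale x 4 x i32>)

define <vscale x 1 x i32> @ext_only_as_dot(<vscale x 1 x i32> %acc, <vscale x 4 x i8> %a) {
  %a.ext = zext <vscale x 4 x i8> %a to <vscale x 4 x i32>
  ; Splat of 1: after lowering to vqdot, this becomes a narrow (SEW=e8) splat
  ; that must be materialized with a vector move before the multiply-accumulate.
  %ones.head = insertelement <vscale x 4 x i32> poison, i32 1, i64 0
  %ones = shufflevector <vscale x 4 x i32> %ones.head, <vscale x 4 x i32> poison, <vscale x 4 x i32> zeroinitializer
  %mul = mul <vscale x 4 x i32> %a.ext, %ones
  %r = call <vscale x 1 x i32> @llvm.experimental.vector.partial.reduce.add.nxv1i32.nxv4i32(<vscale x 1 x i32> %acc, <vscale x 4 x i32> %mul)
  ret <vscale x 1 x i32> %r
}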
Is it worth also having some loopvec cost-model tests, using LV's debug output?
I'd really rather not check debug output; doing so results in pretty fragile tests. One of the ideas I was considering was to update the cost analysis path to do the pattern match so that we can test the costing changes that way, but I'd like that not to be on the critical path for this change, if you're okay with that.
Not sure if it is worth it, but it can work and not be terrible in some cases when using a tight filter for the recipes we are checking costs for (e.g. llvm/test/Transforms/LoopVectorize/X86/CostModel/gather-i16-with-i8-index.ll). But if so, it should probably be a completely separate test from the one checking the code-gen changes.
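For concreteness, a rough sketch of the shape such a filtered cost-model test could take (hypothetical, not part of this PR; the exact debug strings are quoted from memory and may not match current output, and -debug-only requires an asserts build):

; REQUIRES: asserts
; RUN: opt -passes=loop-vectorize -mattr=+v,+experimental-zvqdotq -debug-only=loop-vectorize \
; RUN:   -disable-output %s 2>&1 | FileCheck %s
;
; Only check the cost reported for the reduction add, to keep the test tight:
; CHECK: LV: Found an estimated cost of {{[0-9]+}} for VF {{.*}} For instruction: %add = add i32 %ext.a, %accum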
My preference remains not to test vectorizer debug output; I don't think the complexity is justified here. That said, I'll do so if it's a blocker for approving the review.
Sure, that's fine. I won't block the PR for that. I was just thinking that it might be useful to add something in a different, small test file. Anyway, that can always be done in a separate PR if you think it's useful.