[RISCV] Match strided vector bases in RISCVGatherScatterLowering #93972
Conversation
@llvm/pr-subscribers-backend-risc-v

Author: Luke Lau (lukel97)

Changes

Currently we only match GEPs with a scalar base pointer, but a common pattern emitted by the loop vectorizer is a strided vector base plus some sort of scalar offset:

    %base = getelementptr i64, ptr %p, <vscale x 1 x i64> %step
    %gep = getelementptr i64, <vscale x 1 x ptr> %base, i64 %offset

This is common for accesses into a struct, e.g. f[i].b below:

    struct F { int a; char b; };

    void foo(struct F *f) {
      for (int i = 0; i < 1024; i += 2) {
        f[i].a++;
        f[i].b++;
      }
    }

This patch handles this case in RISCVGatherScatterLowering by recursing on the base pointer if it's a vector. With this we can convert roughly 80% of the indexed loads and stores emitted to strided loads and stores on SPEC CPU 2017 at -O3 -march=rva22u64_v.

Full diff: https://github.com/llvm/llvm-project/pull/93972.diff

2 Files Affected:
diff --git a/llvm/lib/Target/RISCV/RISCVGatherScatterLowering.cpp b/llvm/lib/Target/RISCV/RISCVGatherScatterLowering.cpp
index f0bd25f167d80..f7cca854d2767 100644
--- a/llvm/lib/Target/RISCV/RISCVGatherScatterLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVGatherScatterLowering.cpp
@@ -349,6 +349,22 @@ RISCVGatherScatterLowering::determineBaseAndStride(Instruction *Ptr,
SmallVector<Value *, 2> Ops(GEP->operands());
+ // If the base pointer is a vector, check if it's strided.
+ if (GEP->getPointerOperand()->getType()->isVectorTy()) {
+ auto [BaseBase, Stride] = determineBaseAndStride(
+ cast<Instruction>(GEP->getPointerOperand()), Builder);
+ // If GEP's offset is scalar then we can add it to the base pointer's base.
+ auto IsScalar = [](Value *Idx) { return !Idx->getType()->isVectorTy(); };
+ if (BaseBase && all_of(GEP->indices(), IsScalar)) {
+ Builder.SetInsertPoint(GEP);
+ SmallVector<Value *> Indices(GEP->indices());
+ Value *OffsetBase =
+ Builder.CreateGEP(GEP->getSourceElementType(), BaseBase, Indices, "",
+ GEP->isInBounds());
+ return {OffsetBase, Stride};
+ }
+ }
+
// Base pointer needs to be a scalar.
Value *ScalarBase = Ops[0];
if (ScalarBase->getType()->isVectorTy()) {
diff --git a/llvm/test/CodeGen/RISCV/rvv/strided-load-store.ll b/llvm/test/CodeGen/RISCV/rvv/strided-load-store.ll
index 4feecbbdef94f..53b20161cdaea 100644
--- a/llvm/test/CodeGen/RISCV/rvv/strided-load-store.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/strided-load-store.ll
@@ -301,10 +301,8 @@ define void @constant_stride(<vscale x 1 x i64> %x, ptr %p, i64 %stride) {
define <vscale x 1 x i64> @vector_base_scalar_offset(ptr %p, i64 %offset) {
; CHECK-LABEL: @vector_base_scalar_offset(
-; CHECK-NEXT: [[STEP:%.*]] = call <vscale x 1 x i64> @llvm.experimental.stepvector.nxv1i64()
-; CHECK-NEXT: [[PTRS1:%.*]] = getelementptr i64, ptr [[P:%.*]], <vscale x 1 x i64> [[STEP]]
-; CHECK-NEXT: [[PTRS2:%.*]] = getelementptr i64, <vscale x 1 x ptr> [[PTRS1]], i64 [[OFFSET:%.*]]
-; CHECK-NEXT: [[X:%.*]] = call <vscale x 1 x i64> @llvm.masked.gather.nxv1i64.nxv1p0(<vscale x 1 x ptr> [[PTRS2]], i32 8, <vscale x 1 x i1> shufflevector (<vscale x 1 x i1> insertelement (<vscale x 1 x i1> poison, i1 true, i64 0), <vscale x 1 x i1> poison, <vscale x 1 x i32> zeroinitializer), <vscale x 1 x i64> poison)
+; CHECK-NEXT: [[TMP1:%.*]] = getelementptr i64, ptr [[P:%.*]], i64 [[OFFSET:%.*]]
+; CHECK-NEXT: [[X:%.*]] = call <vscale x 1 x i64> @llvm.riscv.masked.strided.load.nxv1i64.p0.i64(<vscale x 1 x i64> poison, ptr [[TMP1]], i64 8, <vscale x 1 x i1> shufflevector (<vscale x 1 x i1> insertelement (<vscale x 1 x i1> poison, i1 true, i64 0), <vscale x 1 x i1> poison, <vscale x 1 x i32> zeroinitializer))
; CHECK-NEXT: ret <vscale x 1 x i64> [[X]]
;
%step = call <vscale x 1 x i64> @llvm.experimental.stepvector.nxv1i64()
// If the base pointer is a vector, check if it's strided.
if (GEP->getPointerOperand()->getType()->isVectorTy()) {
  auto [BaseBase, Stride] = determineBaseAndStride(
      cast<Instruction>(GEP->getPointerOperand()), Builder);
This might be pedantic, but where did we check that the pointer operand is an instruction?
Whoops, we didn't
I do understand that this is a more general peephole optimization, but a more accurate approach would be to teach the vectorizer to emit strided intrinsics like [vp.strided.load](https://llvm.org/docs/LangRef.html#llvm-experimental-vp-strided-load-intrinsic), or otherwise to provide an index analysis that the cost model can use to accurately estimate such accesses. JFYI: we have a plan to upstream support for strided accesses in EVL vectorization by leveraging these intrinsics.
// If the base pointer is a vector, check if it's strided.
Please move this down into the `if (!ScalarBase)` case below; we'd rather find a splatable base if possible.
Please perform the operand check before doing the recursive call. The unwind from the recursion may generate code.
I tried moving this down into the !ScalarBase case but it caused us to miss the splat_base_scalar_offset test case.
The code below bails if GEP doesn't have a vector offset, so putting it up here catches the case where we have a splat base + scalar offset.
Builder.SetInsertPoint(GEP);
SmallVector<Value *> Indices(GEP->indices());
Value *OffsetBase =
    Builder.CreateGEP(GEP->getSourceElementType(), BaseBase, Indices, "",
GEP->takeName?
The old GEP might still lie around if it has another user, do we still need to take the name?
Ok, might be good to give it a name based on the prior one, but not critical.
; CHECK-NEXT: [[PTRS1:%.*]] = getelementptr i64, ptr [[P:%.*]], <vscale x 1 x i64> [[STEP]]
; CHECK-NEXT: [[PTRS2:%.*]] = getelementptr i64, <vscale x 1 x ptr> [[PTRS1]], i64 [[OFFSET:%.*]]
; CHECK-NEXT: [[X:%.*]] = call <vscale x 1 x i64> @llvm.masked.gather.nxv1i64.nxv1p0(<vscale x 1 x ptr> [[PTRS2]], i32 8, <vscale x 1 x i1> shufflevector (<vscale x 1 x i1> insertelement (<vscale x 1 x i1> poison, i1 true, i64 0), <vscale x 1 x i1> poison, <vscale x 1 x i32> zeroinitializer), <vscale x 1 x i64> poison)
; CHECK-NEXT: [[TMP1:%.*]] = getelementptr i64, ptr [[P:%.*]], i64 [[OFFSET:%.*]]
Please add a test case or two with a base we can't match, and one with a matchable base but non-matchable indices, to cover the bug noted in my comment above.
Added in 458a315
I was discussing this offline with @preames and we're in agreement here: relying on RISCVGatherScatterLowering to catch these widened loads and stores is fragile, but teaching the loop vectorizer to emit llvm.experimental.vp.strided.{load,store} is a bigger piece of work, so this is just a stopgap in the meantime. Also, there are still 2054 v[ls][uo]xei instructions emitted on SPEC after this patch, and I presume some of those are strided accesses that are still slipping through the cracks.
lukel97 force-pushed ("…hout passing the checks") from 979bd56 to aa6a40d
LGTM