[RISCV] Match strided vector bases in RISCVGatherScatterLowering #93972

Merged

lukel97 merged 4 commits into llvm:main from the gather-scatter-base-offset branch on Jun 3, 2024

Conversation

Contributor

lukel97 commented May 31, 2024

Currently we only match GEPs with a scalar base pointer, but a common pattern that's emitted from the loop vectorizer is a strided vector base plus some sort of scalar offset:

%base = getelementptr i64, ptr %p, <vscale x 1 x i64> %step
%gep = getelementptr i64, <vscale x 1 x ptr> %base, i64 %offset

This is common for accesses into a struct, e.g. f[i].b below:

struct F { int a; char b; };

void foo(struct F *f) {
  for (int i = 0; i < 1024; i += 2) {
    f[i].a++;
    f[i].b++;
  }
}

This patch handles this case in RISCVGatherScatterLowering by recursing on the base pointer if it's a vector.

With this we can convert roughly 80% of the indexed loads and stores emitted on SPEC CPU 2017 (-O3 -march=rva22u64_v) to strided loads and stores.
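
Concretely, for the IR above this turns the gather through a vector of pointers into a strided load from a scalar base. A rough before/after sketch, based on the vector_base_scalar_offset test updated in this patch (%mask stands in for the all-true mask the test actually uses):

    ; before: gather through a vector of pointers
    %step = call <vscale x 1 x i64> @llvm.experimental.stepvector.nxv1i64()
    %ptrs1 = getelementptr i64, ptr %p, <vscale x 1 x i64> %step
    %ptrs2 = getelementptr i64, <vscale x 1 x ptr> %ptrs1, i64 %offset
    %x = call <vscale x 1 x i64> @llvm.masked.gather.nxv1i64.nxv1p0(<vscale x 1 x ptr> %ptrs2, i32 8, <vscale x 1 x i1> %mask, <vscale x 1 x i64> poison)

    ; after: scalar base plus a constant 8-byte stride
    %base1 = getelementptr i64, ptr %p, i64 %offset
    %x.strided = call <vscale x 1 x i64> @llvm.riscv.masked.strided.load.nxv1i64.p0.i64(<vscale x 1 x i64> poison, ptr %base1, i64 8, <vscale x 1 x i1> %mask)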

Member

llvmbot commented May 31, 2024

@llvm/pr-subscribers-backend-risc-v

Author: Luke Lau (lukel97)

Full diff: https://github.com/llvm/llvm-project/pull/93972.diff

2 Files Affected:

  • (modified) llvm/lib/Target/RISCV/RISCVGatherScatterLowering.cpp (+16)
  • (modified) llvm/test/CodeGen/RISCV/rvv/strided-load-store.ll (+2-4)
diff --git a/llvm/lib/Target/RISCV/RISCVGatherScatterLowering.cpp b/llvm/lib/Target/RISCV/RISCVGatherScatterLowering.cpp
index f0bd25f167d80..f7cca854d2767 100644
--- a/llvm/lib/Target/RISCV/RISCVGatherScatterLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVGatherScatterLowering.cpp
@@ -349,6 +349,22 @@ RISCVGatherScatterLowering::determineBaseAndStride(Instruction *Ptr,
 
   SmallVector<Value *, 2> Ops(GEP->operands());
 
+  // If the base pointer is a vector, check if it's strided.
+  if (GEP->getPointerOperand()->getType()->isVectorTy()) {
+    auto [BaseBase, Stride] = determineBaseAndStride(
+        cast<Instruction>(GEP->getPointerOperand()), Builder);
+    // If GEP's offset is scalar then we can add it to the base pointer's base.
+    auto IsScalar = [](Value *Idx) { return !Idx->getType()->isVectorTy(); };
+    if (BaseBase && all_of(GEP->indices(), IsScalar)) {
+      Builder.SetInsertPoint(GEP);
+      SmallVector<Value *> Indices(GEP->indices());
+      Value *OffsetBase =
+          Builder.CreateGEP(GEP->getSourceElementType(), BaseBase, Indices, "",
+                            GEP->isInBounds());
+      return {OffsetBase, Stride};
+    }
+  }
+
   // Base pointer needs to be a scalar.
   Value *ScalarBase = Ops[0];
   if (ScalarBase->getType()->isVectorTy()) {
diff --git a/llvm/test/CodeGen/RISCV/rvv/strided-load-store.ll b/llvm/test/CodeGen/RISCV/rvv/strided-load-store.ll
index 4feecbbdef94f..53b20161cdaea 100644
--- a/llvm/test/CodeGen/RISCV/rvv/strided-load-store.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/strided-load-store.ll
@@ -301,10 +301,8 @@ define void @constant_stride(<vscale x 1 x i64> %x, ptr %p, i64 %stride) {
 
 define <vscale x 1 x i64> @vector_base_scalar_offset(ptr %p, i64 %offset) {
 ; CHECK-LABEL: @vector_base_scalar_offset(
-; CHECK-NEXT:    [[STEP:%.*]] = call <vscale x 1 x i64> @llvm.experimental.stepvector.nxv1i64()
-; CHECK-NEXT:    [[PTRS1:%.*]] = getelementptr i64, ptr [[P:%.*]], <vscale x 1 x i64> [[STEP]]
-; CHECK-NEXT:    [[PTRS2:%.*]] = getelementptr i64, <vscale x 1 x ptr> [[PTRS1]], i64 [[OFFSET:%.*]]
-; CHECK-NEXT:    [[X:%.*]] = call <vscale x 1 x i64> @llvm.masked.gather.nxv1i64.nxv1p0(<vscale x 1 x ptr> [[PTRS2]], i32 8, <vscale x 1 x i1> shufflevector (<vscale x 1 x i1> insertelement (<vscale x 1 x i1> poison, i1 true, i64 0), <vscale x 1 x i1> poison, <vscale x 1 x i32> zeroinitializer), <vscale x 1 x i64> poison)
+; CHECK-NEXT:    [[TMP1:%.*]] = getelementptr i64, ptr [[P:%.*]], i64 [[OFFSET:%.*]]
+; CHECK-NEXT:    [[X:%.*]] = call <vscale x 1 x i64> @llvm.riscv.masked.strided.load.nxv1i64.p0.i64(<vscale x 1 x i64> poison, ptr [[TMP1]], i64 8, <vscale x 1 x i1> shufflevector (<vscale x 1 x i1> insertelement (<vscale x 1 x i1> poison, i1 true, i64 0), <vscale x 1 x i1> poison, <vscale x 1 x i32> zeroinitializer))
 ; CHECK-NEXT:    ret <vscale x 1 x i64> [[X]]
 ;
   %step = call <vscale x 1 x i64> @llvm.experimental.stepvector.nxv1i64()

  // If the base pointer is a vector, check if it's strided.
  if (GEP->getPointerOperand()->getType()->isVectorTy()) {
    auto [BaseBase, Stride] = determineBaseAndStride(
        cast<Instruction>(GEP->getPointerOperand()), Builder);
Member

This might be pedantic, but where did we check that the pointer operand is an instruction?

Contributor Author

Whoops, we didn't

@nikolaypanchenko
Contributor

I do understand that this is a more general peephole optimization, but a more accurate approach would be to teach the vectorizer to emit strided intrinsics like [vp.strided.load](https://llvm.org/docs/LangRef.html#llvm-experimental-vp-strided-load-intrinsic), or otherwise to provide an index analysis that the cost model can use to accurately estimate such accesses.

JFYI: we have a plan to upstream support for strided accesses for EVL vectorization by leveraging these intrinsics.
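
For reference, a strided access expressed with that intrinsic looks roughly like this (a sketch based on the LangRef; %p, %stride, %mask and %evl are placeholders):

    %v = call <vscale x 1 x i64> @llvm.experimental.vp.strided.load.nxv1i64.p0.i64(ptr %p, i64 %stride, <vscale x 1 x i1> %mask, i32 %evl)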

@@ -349,6 +349,22 @@ RISCVGatherScatterLowering::determineBaseAndStride(Instruction *Ptr,

SmallVector<Value *, 2> Ops(GEP->operands());

// If the base pointer is a vector, check if it's strided.
Collaborator

Please move this down into the if (!ScalarBase) case below; we'd rather find a splatable base if possible.

Please perform the operand check before doing the recursive call. The unwind from the recursion may generate code.

Contributor Author

I tried moving this down into the !ScalarBase case, but it caused us to miss the splat_base_scalar_offset test case.

The code below bails if the GEP doesn't have a vector offset, so putting it up here catches the case where we have a splat base + scalar offset.
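
That shape looks roughly like this (an illustrative sketch; the names are hypothetical, the actual coverage is the existing splat_base_scalar_offset test):

    %head = insertelement <vscale x 1 x ptr> poison, ptr %p, i32 0
    %splat = shufflevector <vscale x 1 x ptr> %head, <vscale x 1 x ptr> poison, <vscale x 1 x i32> zeroinitializer
    %gep = getelementptr i64, <vscale x 1 x ptr> %splat, i64 %offset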

      Builder.SetInsertPoint(GEP);
      SmallVector<Value *> Indices(GEP->indices());
      Value *OffsetBase =
          Builder.CreateGEP(GEP->getSourceElementType(), BaseBase, Indices, "",
Collaborator

GEP->takeName?

Contributor Author

The old GEP might still lie around if it has another user; do we still need to take the name?

Collaborator

Ok, might be good to give it a name based on the prior one, but not critical.

; CHECK-NEXT: [[PTRS1:%.*]] = getelementptr i64, ptr [[P:%.*]], <vscale x 1 x i64> [[STEP]]
; CHECK-NEXT: [[PTRS2:%.*]] = getelementptr i64, <vscale x 1 x ptr> [[PTRS1]], i64 [[OFFSET:%.*]]
; CHECK-NEXT: [[X:%.*]] = call <vscale x 1 x i64> @llvm.masked.gather.nxv1i64.nxv1p0(<vscale x 1 x ptr> [[PTRS2]], i32 8, <vscale x 1 x i1> shufflevector (<vscale x 1 x i1> insertelement (<vscale x 1 x i1> poison, i1 true, i64 0), <vscale x 1 x i1> poison, <vscale x 1 x i32> zeroinitializer), <vscale x 1 x i64> poison)
; CHECK-NEXT: [[TMP1:%.*]] = getelementptr i64, ptr [[P:%.*]], i64 [[OFFSET:%.*]]
Collaborator

Please add a test case or two with a base we can't match, and one with a matchable base but non-matchable indices, to cover the bug noted in my comment above.
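
One non-matchable base could look like this (a hypothetical sketch, not necessarily the test that was added):

    ; the vector base is loaded from memory, so no stride can be determined
    %base = load <vscale x 1 x ptr>, ptr %q
    %gep = getelementptr i64, <vscale x 1 x ptr> %base, i64 %offset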

Contributor Author

Added in 458a315

Contributor Author

lukel97 commented May 31, 2024

> I do understand that this is a more general peephole optimization, but a more accurate approach would be to teach the vectorizer to emit strided intrinsics like [vp.strided.load](https://llvm.org/docs/LangRef.html#llvm-experimental-vp-strided-load-intrinsic), or otherwise to provide an index analysis that the cost model can use to accurately estimate such accesses.
>
> JFYI: we have a plan to upstream support for strided accesses for EVL vectorization by leveraging these intrinsics.

I was discussing this offline with @preames and we're in agreement here: relying on RISCVGatherScatterLowering to catch these widened loads and stores is fragile. But teaching the loop vectorizer to emit llvm.experimental.vp.strided.{load,store} is a bigger piece of work, so this is just a stopgap in the meantime.

Also, there are still 2054 v[ls][uo]xei instructions emitted on SPEC after this patch, and I presume some of those are strided accesses that are still slipping through the cracks.

lukel97 added 3 commits May 31, 2024 18:59
lukel97 force-pushed the gather-scatter-base-offset branch from 979bd56 to aa6a40d on May 31, 2024 at 18:07
Collaborator

preames left a comment

LGTM

lukel97 merged commit 910098e into llvm:main on Jun 3, 2024
7 checks passed