[mlir][vector] Prevent folding non memref-type gather into maskedload #135371

Conversation
@llvm/pr-subscribers-mlir-vector

Author: Sagar Kulkarni (sagarkulkarni19)

Changes

This patch fixes an issue in the FoldContiguousGather pattern, which was incorrectly folding vector.gather operations with contiguous indices into vector.maskedload operations regardless of the base operand type. While vector.gather operations can work on both tensor and memref types, vector.maskedload operations are only valid for memref types, so the pattern was lowering a tensor-based gather into an invalid masked load. The fix adds a type check so the pattern only applies to memref-based gather operations.

Full diff: https://github.com/llvm/llvm-project/pull/135371.diff

2 Files Affected:
diff --git a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
index 98d98f067de14..8955438b57343 100644
--- a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
+++ b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
@@ -5340,6 +5340,9 @@ class FoldContiguousGather final : public OpRewritePattern<GatherOp> {
using OpRewritePattern::OpRewritePattern;
LogicalResult matchAndRewrite(GatherOp op,
PatternRewriter &rewriter) const override {
+ if (!op.getBase().getType().isa<MemRefType>())
+ return failure();
+
if (failed(isZeroBasedContiguousSeq(op.getIndexVec())))
return failure();
diff --git a/mlir/test/Dialect/Vector/canonicalize.mlir b/mlir/test/Dialect/Vector/canonicalize.mlir
index b7db8ec834be7..7d9223696712d 100644
--- a/mlir/test/Dialect/Vector/canonicalize.mlir
+++ b/mlir/test/Dialect/Vector/canonicalize.mlir
@@ -3149,6 +3149,18 @@ func.func @contiguous_gather_step(%base: memref<?xf32>,
// -----
+// CHECK-LABEL: @dont_fold_tensor_type_contiguous_gather
+func.func @dont_fold_tensor_type_contiguous_gather(%base: tensor<8xf32>, %mask: vector<4xi1>, %pass_thru: vector<4xf32>) -> vector<4xf32> {
+ %c0 = arith.constant 0 : index
+ %indices = arith.constant dense<[0, 1, 2, 3]> : vector<4xindex>
+ // CHECK: vector.gather
+ // CHECK-NOT: vector.maskedload
+ %0 = vector.gather %base[%c0][%indices], %mask, %pass_thru : tensor<8xf32>, vector<4xindex>, vector<4xi1>, vector<4xf32> into vector<4xf32>
+ return %0 : vector<4xf32>
+}
+
+// -----
+
// CHECK-LABEL: @gather_broadcast(
// TODO: Broadcast is not supported yet
// CHECK: %[[R:.*]] = vector.gather
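For context, here is a minimal sketch (not part of the patch) of the memref case this pattern legitimately folds, modeled on the existing @contiguous_gather tests; the function name and shapes are illustrative:

```mlir
// A zero-based, contiguous gather from a memref base.
func.func @fold_contiguous_gather_memref(%base: memref<8xf32>,
                                         %mask: vector<4xi1>,
                                         %pass_thru: vector<4xf32>) -> vector<4xf32> {
  %c0 = arith.constant 0 : index
  %indices = arith.constant dense<[0, 1, 2, 3]> : vector<4xindex>
  %0 = vector.gather %base[%c0][%indices], %mask, %pass_thru
      : memref<8xf32>, vector<4xindex>, vector<4xi1>, vector<4xf32> into vector<4xf32>
  return %0 : vector<4xf32>
}

// After canonicalization, the gather becomes a contiguous masked load,
// which is only defined for memref bases:
//   %0 = vector.maskedload %base[%c0], %mask, %pass_thru
//       : memref<8xf32>, vector<4xi1>, vector<4xf32> into vector<4xf32>
```

With a tensor base, as in the new test above, the same rewrite would produce invalid IR, which is exactly what the added type check prevents.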
This is a weird discrepancy between vector ops that I wasn't aware of. The fix itself LGTM.
Please also wait for a review from @banach-space or @dcaballe.
The fix makes sense to me — thanks for taking care of it! I've left a few nits, but otherwise LGTM.
@dcaballe and I were actually just talking about the “duality” of read/write ops in the Vector dialect — we agree it's something we should clean up and this PR confirms that.
@@ -5340,6 +5340,9 @@ class FoldContiguousGather final : public OpRewritePattern<GatherOp> {
using OpRewritePattern::OpRewritePattern;
LogicalResult matchAndRewrite(GatherOp op,
PatternRewriter &rewriter) const override {
+ if (!op.getBase().getType().isa<MemRefType>())
+ return failure();
[nit] Could you use notifyMatchFailure? Thanks!
Good point, done.
@@ -3149,6 +3149,18 @@ func.func @contiguous_gather_step(%base: memref<?xf32>,

// -----

+// CHECK-LABEL: @dont_fold_tensor_type_contiguous_gather
+func.func @dont_fold_tensor_type_contiguous_gather(%base: tensor<8xf32>, %mask: vector<4xi1>, %pass_thru: vector<4xf32>) -> vector<4xf32> {
- no_fold is shorter than dont_fold :)
- The naming format seems to be @contiguous_gather_{other_stuff}, so @dont_fold_tensor_type_contiguous_gather -> @no_fold_contiguous_gather_tensor (or something similar)
Changed it to @no_fold_contiguous_gather_tensor.
+ %indices = arith.constant dense<[0, 1, 2, 3]> : vector<4xindex>
+ // CHECK: vector.gather
+ // CHECK-NOT: vector.maskedload
+ %0 = vector.gather %base[%c0][%indices], %mask, %pass_thru : tensor<8xf32>, vector<4xindex>, vector<4xi1>, vector<4xf32> into vector<4xf32>
[nit] Would you mind splitting into two lines like we did above?
Done.

Side comment: gather takes
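For reference, a sketch of the two-line split being requested, mirroring the wrapping used in the earlier memref tests (the exact line breaks in the final commit may differ):

```mlir
%0 = vector.gather %base[%c0][%indices], %mask, %pass_thru
    : tensor<8xf32>, vector<4xindex>, vector<4xi1>, vector<4xf32> into vector<4xf32>
```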
Force-pushed 1e50115 to 34b56da.
@banach-space I've addressed the comments. Let me know if it looks good. @Hardcode84 @banach-space Could one of you help me merge it, since I don't have merge permissions? Thanks!
@sagarkulkarni19 Congratulations on having your first Pull Request (PR) merged into the LLVM Project! Your changes will be combined with recent changes from other authors and tested by our build bots. If your change does cause a problem, it may be reverted, or you can revert it yourself; this is a normal part of LLVM development.
This was introduced in: llvm#135371 (referenced from llvm#135512)

Thanks for pointing this out: #135749