
[mlir][linalg] Add tests for PadOp #110271


Merged
banach-space merged 1 commit into llvm:main from andrzej/pad_op_add_tests on Sep 28, 2024

Conversation

banach-space (Contributor) commented Sep 27, 2024

Adds 3 tests for the logic to pad Linalg ops, specifically for the
transformation under the `transform.structured.pad` TD Op.

For `@zero_pad_static` I simply took an existing test and added
check-lines. According to the comments, it should fail. However, when I
tried it, it actually worked. Indeed, it triggers an important edge
case - padding by 0 when all the shapes are static.
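
For illustration, here is a minimal hand-written sketch (identifiers and the wrapper function are mine, not from the patch) of the IR shape the new check-lines pin down: thanks to `nofold`, the zero-pad op is preserved even though it leaves the type unchanged.

```mlir
func.func @zero_pad_sketch(%arg0: tensor<24x12xf32>) -> tensor<24x12xf32> {
  %cst = arith.constant 0.0 : f32
  // Padding by 0 on a fully static tensor: source and result types match,
  // and `nofold` stops the pad from being folded away.
  %padded = tensor.pad %arg0 nofold low[0, 0] high[0, 0] {
  ^bb0(%i: index, %j: index):
    tensor.yield %cst : f32
  } : tensor<24x12xf32> to tensor<24x12xf32>
  func.return %padded : tensor<24x12xf32>
}
```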

`@zero_pad_dynamic` exercises a similar case, but some dimensions in the
input tensors are made dynamic - that's added to improve the test
coverage. Note that in this case we are padding the static dim.
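
Again as a hand-written sketch (not from the patch): since the pad amounts are all zero, even the fully dynamic output operand can be given a `nofold` pad without knowing any of its sizes.

```mlir
func.func @zero_pad_dynamic_sketch(%arg2: tensor<?x?xf32>) -> tensor<?x?xf32> {
  %cst = arith.constant 0.0 : f32
  // Zero low/high amounts: no bound on the dynamic sizes is needed, so the
  // result type is simply the source type again.
  %padded = tensor.pad %arg2 nofold low[0, 0] high[0, 0] {
  ^bb0(%i: index, %j: index):
    tensor.yield %cst : f32
  } : tensor<?x?xf32> to tensor<?x?xf32>
  func.return %padded : tensor<?x?xf32>
}
```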

Finally, `@negative_no_ub_estimate` is similar to `@zero_pad_dynamic`,
but we are trying to pad a dynamic dim instead. This fails, as it's
impossible to compute the padded shape.
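
To make the failure mode concrete (my reading of the transform, sketched by hand): `tensor.pad` itself has no problem padding a dynamic dim - the result simply stays dynamic. What `transform.structured.pad` needs is a constant upper bound for the dim, so that the padded type can be made static; no such bound can be inferred here, hence the error.

```mlir
func.func @pad_dynamic_dim_sketch(%t: tensor<12x?xf32>) -> tensor<12x?xf32> {
  %cst = arith.constant 0.0 : f32
  // Plain tensor.pad on a dynamic dim is fine: the padded dim stays dynamic.
  // The transform, however, wants a static padded shape, which would require
  // an upper bound for the `?` - and none is available in this IR.
  %p = tensor.pad %t low[0, 0] high[0, 1] {
  ^bb0(%i: index, %j: index):
    tensor.yield %cst : f32
  } : tensor<12x?xf32> to tensor<12x?xf32>
  func.return %p : tensor<12x?xf32>
}
```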

llvmbot (Member) commented Sep 27, 2024

@llvm/pr-subscribers-mlir-linalg

@llvm/pr-subscribers-mlir

Author: Andrzej Warzyński (banach-space)



Full diff: https://github.com/llvm/llvm-project/pull/110271.diff

1 file affected:

  • (modified) mlir/test/Dialect/Linalg/transform-op-pad.mlir (+86-8)
diff --git a/mlir/test/Dialect/Linalg/transform-op-pad.mlir b/mlir/test/Dialect/Linalg/transform-op-pad.mlir
index 47bb5ddf4afc3e..120a525f3bdae9 100644
--- a/mlir/test/Dialect/Linalg/transform-op-pad.mlir
+++ b/mlir/test/Dialect/Linalg/transform-op-pad.mlir
@@ -209,12 +209,26 @@ module attributes {transform.with_named_sequence} {
 
 // -----
 
-// CHECK-LABEL: @pad(
-func.func @pad(%arg0: tensor<24x12xf32>,
-               %arg1: tensor<12x25xf32>,
-               %arg2: tensor<24x25xf32>) -> tensor<24x25xf32> {
-  // This is attached to an error that is silenceable and is not reported by this transform
-  //   {{when applied to this op}}
+// With all padded dims being static, there's nothing to pad. However, with
+// the `nofold` attribute set (see `pack_paddings`), the corresponding pad
+// Ops are preserved.
+
+// CHECK-LABEL: @zero_pad_static(
+func.func @zero_pad_static(%arg0: tensor<24x12xf32>,
+                           %arg1: tensor<12x25xf32>,
+                           %arg2: tensor<24x25xf32>) -> tensor<24x25xf32> {
+
+// CHECK-SAME:      %[[ARG_0:.*]]: tensor<24x12xf32>,
+// CHECK-SAME:      %[[ARG_1:.*]]: tensor<12x25xf32>,
+// CHECK-SAME:      %[[ARG_2:.*]]: tensor<24x25xf32>) -> tensor<24x25xf32> {
+
+// CHECK:           %[[PAD_ARG_0:.*]] = tensor.pad %[[ARG_0]] nofold low[0, 0] high[0, 0]
+// CHECK:           %[[PAD_ARG_1:.*]] = tensor.pad %[[ARG_1]] nofold low[0, 0] high[0, 0]
+// CHECK-NOT:       tensor.pad
+
+// CHECK:           %[[MATMUL:.*]] = linalg.matmul
+// CHECK-SAME:      ins(%[[PAD_ARG_0]], %[[PAD_ARG_1]] : tensor<24x12xf32>, tensor<12x25xf32>)
+// CHECK-SAME:      outs(%[[ARG_2]]
   %0 = linalg.matmul ins(%arg0, %arg1 : tensor<24x12xf32>, tensor<12x25xf32>) outs(%arg2 : tensor<24x25xf32>) -> tensor<24x25xf32>
   func.return %0 : tensor<24x25xf32>
 }
@@ -222,8 +236,6 @@ func.func @pad(%arg0: tensor<24x12xf32>,
 module attributes {transform.with_named_sequence} {
   transform.named_sequence @__transform_main(%arg1: !transform.any_op {transform.readonly}) {
     %0 = transform.structured.match ops{["linalg.matmul"]} in %arg1 : (!transform.any_op) -> !transform.any_op
-    // This error is silenceable and is not reported by this transform
-    //   {{transform.structured.pad failed to apply}}
     %padded, %pad, %copy_back = transform.structured.pad %0 {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
@@ -235,6 +247,72 @@ module attributes {transform.with_named_sequence} {
 
 // -----
 
+// With all padded dims being static, there's nothing to pad. However, with the
+// `nofold` attribute set (see `pack_paddings`), the corresponding pad Ops are
+// preserved. Same as above, but some dims are now dynamic.
+
+// CHECK-LABEL: @zero_pad_dynamic(
+func.func @zero_pad_dynamic(%arg0: tensor<?x12xf32>,
+                            %arg1: tensor<12x?xf32>,
+                            %arg2: tensor<?x?xf32>) -> tensor<?x?xf32> {
+
+// CHECK-SAME:      %[[ARG_0:.*]]: tensor<?x12xf32>,
+// CHECK-SAME:      %[[ARG_1:.*]]: tensor<12x?xf32>,
+// CHECK-SAME:      %[[ARG_2:.*]]: tensor<?x?xf32>) -> tensor<?x?xf32> {
+
+// CHECK:           %[[PAD_ARG_0:.*]] = tensor.pad %[[ARG_0]] nofold low[0, 0] high[0, 0]
+// CHECK:           %[[PAD_ARG_1:.*]] = tensor.pad %[[ARG_1]] nofold low[0, 0] high[0, 0]
+// CHECK:           %[[PAD_ARG_2:.*]] = tensor.pad %[[ARG_2]] nofold low[0, 0] high[0, 0]
+
+// CHECK:           %[[MATMUL:.*]] = linalg.matmul
+// CHECK-SAME:      ins(%[[PAD_ARG_0]], %[[PAD_ARG_1]] : tensor<?x12xf32>, tensor<12x?xf32>)
+// CHECK-SAME:      outs(%[[PAD_ARG_2]]
+  %0 = linalg.matmul ins(%arg0, %arg1 : tensor<?x12xf32>, tensor<12x?xf32>) outs(%arg2 : tensor<?x?xf32>) -> tensor<?x?xf32>
+  func.return %0 : tensor<?x?xf32>
+}
+
+module attributes {transform.with_named_sequence} {
+  transform.named_sequence @__transform_main(%arg1: !transform.any_op {transform.readonly}) {
+    %0 = transform.structured.match ops{["linalg.matmul"]} in %arg1 : (!transform.any_op) -> !transform.any_op
+    %padded, %pad, %copy_back = transform.structured.pad %0 {
+      padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
+      // Note - only the static dim is padded
+      padding_dimensions=[2],
+      pack_paddings=[1, 1, 1]
+    } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
+    transform.yield
+  }
+}
+
+// -----
+
+// Impossible to get an upper bound for the dim to pad - the transform fails.
+
+func.func @negative_no_ub_estimate(%arg0: tensor<?x12xf32>,
+                                   %arg1: tensor<12x?xf32>,
+                                   %arg2: tensor<?x?xf32>) -> tensor<?x?xf32> {
+
+  // expected-note @below {{target op}}
+  %0 = linalg.matmul ins(%arg0, %arg1 : tensor<?x12xf32>, tensor<12x?xf32>) outs(%arg2 : tensor<?x?xf32>) -> tensor<?x?xf32>
+  func.return %0 : tensor<?x?xf32>
+}
+
+module attributes {transform.with_named_sequence} {
+  transform.named_sequence @__transform_main(%arg1: !transform.any_op {transform.readonly}) {
+    %0 = transform.structured.match ops{["linalg.matmul"]} in %arg1 : (!transform.any_op) -> !transform.any_op
+    // expected-error @below {{failed to pad op}}
+    %padded, %pad, %copy_back = transform.structured.pad %0 {
+      padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
+      // Note - attempting to pad a non-static dim
+      padding_dimensions=[1],
+      pack_paddings=[1, 1, 1]
+    } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
+    transform.yield
+  }
+}
+
+// -----
+
 // Check that the padding can be applied even when the output argument of the
 // linalg op is not produced by an empty op or an extract_slice op.
 

banach-space merged commit 75e08a5 into llvm:main on Sep 28, 2024
11 checks passed
banach-space deleted the andrzej/pad_op_add_tests branch on September 28, 2024 at 09:32