
[mlir][td] Rename pack_paddings in structured.pad #111036


Merged: 1 commit into llvm:main, Oct 15, 2024

Conversation

banach-space (Contributor)

The `pack_paddings` attribute in the `structured.pad` TD Op is used to set
the `nofold` attribute in the generated `tensor.pad` Op. The current name
is confusing and suggests that there's a relation with the `tensor.pack`
Op. This patch renames it to `nofold_flags` to better match the actual
usage.
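
For illustration, a minimal before/after sketch of the renamed attribute as it appears on the transform op, adapted from the tests in this patch (assume `%0` holds a handle to the target linalg op; the other attributes are elided):

    // Before this patch:
    %padded, %pad, %copy_back = transform.structured.pad %0 {
      padding_dimensions = [0, 1, 2],
      pack_paddings = [1, 1, 0]
    } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)

    // After this patch (same semantics; a 1 sets `nofold` on the generated tensor.pad):
    %padded2, %pad2, %copy_back2 = transform.structured.pad %0 {
      padding_dimensions = [0, 1, 2],
      nofold_flags = [1, 1, 0]
    } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
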
llvmbot (Member) commented Oct 3, 2024

@llvm/pr-subscribers-mlir-linalg

@llvm/pr-subscribers-mlir

Author: Andrzej Warzyński (banach-space)

Changes


Full diff: https://github.com/llvm/llvm-project/pull/111036.diff

10 Files Affected:

  • (modified) mlir/include/mlir/Dialect/Linalg/TransformOps/LinalgTransformOps.td (+1-1)
  • (modified) mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h (+3-3)
  • (modified) mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp (+13-13)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/Padding.cpp (+3-3)
  • (modified) mlir/python/mlir/dialects/transform/structured.py (+2-2)
  • (modified) mlir/test/Dialect/Linalg/matmul-shared-memory-padding.mlir (+2-2)
  • (modified) mlir/test/Dialect/Linalg/pad-to-specific-memory-space.mlir (+2-2)
  • (modified) mlir/test/Dialect/Linalg/transform-op-pad.mlir (+13-13)
  • (modified) mlir/test/Dialect/Linalg/transform-ops-invalid.mlir (+2-2)
  • (modified) mlir/test/python/dialects/transform_structured_ext.py (+3-3)
diff --git a/mlir/include/mlir/Dialect/Linalg/TransformOps/LinalgTransformOps.td b/mlir/include/mlir/Dialect/Linalg/TransformOps/LinalgTransformOps.td
index a997502c34299c..5dbc4856319d27 100644
--- a/mlir/include/mlir/Dialect/Linalg/TransformOps/LinalgTransformOps.td
+++ b/mlir/include/mlir/Dialect/Linalg/TransformOps/LinalgTransformOps.td
@@ -1023,7 +1023,7 @@ def PadOp : Op<Transform_Dialect, "structured.pad",
          Variadic<TransformAnyParamTypeOrAnyHandle>:$pad_to_multiple_of,
          DefaultValuedOptionalAttr<DenseI64ArrayAttr, "{}">:
                           $static_pad_to_multiple_of,
-         DefaultValuedAttr<I64ArrayAttr, "{}">:$pack_paddings,
+         DefaultValuedAttr<I64ArrayAttr, "{}">:$nofold_flags,
          DefaultValuedAttr<
           TypedArrayAttrBase<I64ArrayAttr, "array of arrays of i64">,
           "{}">:$transpose_paddings,
diff --git a/mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h b/mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h
index cc12ed7cfa6b54..8ac08b01cfb0b0 100644
--- a/mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h
+++ b/mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h
@@ -296,9 +296,9 @@ struct LinalgPaddingOptions {
   }
   /// A flag for every operand to mark the PadOp as nofold which enables
   /// packing for statically shaped operands.
-  SmallVector<bool> packPaddings;
+  SmallVector<bool> nofoldFlags;
   LinalgPaddingOptions &setPackPaddings(ArrayRef<bool> pp) {
-    packPaddings.assign(pp.begin(), pp.end());
+    nofoldFlags.assign(pp.begin(), pp.end());
     return *this;
   }
   /// A number of loops to hoist the PadOp out for every operand.
@@ -530,7 +530,7 @@ void peelLoops(RewriterBase &rewriter, ArrayRef<scf::ForOp> loops);
 ///
 /// * "options.padToMultipleOf" indicates that each padding dimension should be
 ///   padded to the specified multiple.
-/// * Use "options.paddingValues" and "options.packPaddings" to set padding
+/// * Use "options.paddingValues" and "options.nofoldFlags" to set padding
 ///   value and nofold attribute of the created tensor::PadOps, respectively.
 /// * The unpadded results (extracted slice of the cloned operation) are
 ///   returned via `replacements`.
diff --git a/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp b/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
index 3b7b367d3cf2d5..c101303da55b9d 100644
--- a/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
+++ b/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
@@ -1718,7 +1718,7 @@ transform::PackTransposeOp::apply(transform::TransformRewriter &rewriter,
 void transform::PadOp::build(OpBuilder &b, OperationState &result, Value target,
                              ArrayRef<int64_t> paddingDimensions,
                              ArrayRef<int64_t> padToMultipleOf,
-                             ArrayRef<int64_t> packPaddings,
+                             ArrayRef<int64_t> nofoldFlags,
                              ArrayRef<Attribute> transposePaddings,
                              StringRef copyBackOp) {
   auto resultType = transform::AnyOpType::get(b.getContext());
@@ -1733,7 +1733,7 @@ void transform::PadOp::build(OpBuilder &b, OperationState &result, Value target,
                (padToMultipleOf.empty()
                     ? DenseI64ArrayAttr()
                     : b.getDenseI64ArrayAttr(padToMultipleOf)),
-               /*packPaddings=*/b.getI64ArrayAttr(packPaddings),
+               /*nofoldFlags=*/b.getI64ArrayAttr(nofoldFlags),
                /*transposePaddings=*/b.getArrayAttr(transposePaddings),
                /*copyBackOp=*/b.getStringAttr(copyBackOp));
 }
@@ -1741,7 +1741,7 @@ void transform::PadOp::build(OpBuilder &b, OperationState &result, Value target,
 void transform::PadOp::build(OpBuilder &b, OperationState &result, Value target,
                              ArrayRef<int64_t> paddingDimensions,
                              ArrayRef<OpFoldResult> mixedPadToMultipleOf,
-                             ArrayRef<int64_t> packPaddings,
+                             ArrayRef<int64_t> nofoldFlags,
                              ArrayRef<Attribute> transposePaddings,
                              StringRef copyBackOp) {
   auto resultType = transform::AnyOpType::get(b.getContext());
@@ -1757,7 +1757,7 @@ void transform::PadOp::build(OpBuilder &b, OperationState &result, Value target,
                /*paddingDimensions=*/b.getI64ArrayAttr(paddingDimensions),
                /*padToMultipleOf=*/dynamicPadToMultipleOf,
                /*padToMultipleOf=*/staticPadToMultipleOf,
-               /*packPaddings=*/b.getI64ArrayAttr(packPaddings),
+               /*nofoldFlags=*/b.getI64ArrayAttr(nofoldFlags),
                /*transposePaddings=*/b.getArrayAttr(transposePaddings),
                /*copyBackOp=*/b.getStringAttr(copyBackOp));
 }
@@ -1791,10 +1791,10 @@ transform::PadOp::apply(transform::TransformRewriter &rewriter,
     }
 
     // Convert the integer packing flags to booleans.
-    SmallVector<bool> packPaddings;
+    SmallVector<bool> nofoldFlags;
     for (int64_t packPadding :
-         extractFromIntegerArrayAttr<int64_t>(getPackPaddings()))
-      packPaddings.push_back(static_cast<bool>(packPadding));
+         extractFromIntegerArrayAttr<int64_t>(getNofoldFlags()))
+      nofoldFlags.push_back(static_cast<bool>(packPadding));
 
     // Convert the padding values to attributes.
     SmallVector<Attribute> paddingValues;
@@ -1852,7 +1852,7 @@ transform::PadOp::apply(transform::TransformRewriter &rewriter,
 
     options.padToMultipleOf = padToMultipleOf;
     options.paddingValues = paddingValues;
-    options.packPaddings = packPaddings;
+    options.nofoldFlags = nofoldFlags;
     if (getCopyBackOp() ==
         bufferization::MaterializeInDestinationOp::getOperationName()) {
       options.copyBackOp = LinalgPaddingOptions::CopyBackOp::
@@ -1898,14 +1898,14 @@ transform::PadOp::apply(transform::TransformRewriter &rewriter,
 }
 
 LogicalResult transform::PadOp::verify() {
-  SmallVector<int64_t> packPaddings =
-      extractFromIntegerArrayAttr<int64_t>(getPackPaddings());
-  if (any_of(packPaddings, [](int64_t packPadding) {
+  SmallVector<int64_t> nofoldFlags =
+      extractFromIntegerArrayAttr<int64_t>(getNofoldFlags());
+  if (any_of(nofoldFlags, [](int64_t packPadding) {
         return packPadding != 0 && packPadding != 1;
       })) {
     return emitOpError()
-           << "expects pack_paddings to contain booleans (0/1), found "
-           << getPackPaddings();
+           << "expects nofold_flags to contain booleans (0/1), found "
+           << getNofoldFlags();
   }
 
   SmallVector<int64_t> paddingDimensions =
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Padding.cpp b/mlir/lib/Dialect/Linalg/Transforms/Padding.cpp
index a066c44408915e..9a685f6dc96acc 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Padding.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Padding.cpp
@@ -88,7 +88,7 @@ static LogicalResult computePaddedShape(linalg::LinalgOp opToPad,
 }
 
 /// Pad the `opOperand` in the "paddingDimensions" using the padding value and
-/// the nofold flag found in "paddingValues" and "packPaddings", respectively.
+/// the nofold flag found in "paddingValues" and "nofoldFlags", respectively.
 ///
 /// Exit early and return the `opOperand` value if it already has the requested
 /// shape. i.e.:
@@ -117,8 +117,8 @@ static FailureOr<Value> padOperandToSmallestStaticBoundingBox(
 
   // Return the unpadded operand if padding to a static shape is not needed and
   // if the nofold flag is not set.
-  bool nofold = opOperand->getOperandNumber() < options.packPaddings.size()
-                    ? options.packPaddings[opOperand->getOperandNumber()]
+  bool nofold = opOperand->getOperandNumber() < options.nofoldFlags.size()
+                    ? bool(options.nofoldFlags[opOperand->getOperandNumber()])
                     : false;
   if (!nofold && alreadyHasRequestedShape)
     return opOperand->get();
diff --git a/mlir/python/mlir/dialects/transform/structured.py b/mlir/python/mlir/dialects/transform/structured.py
index 41051c0d5b2ffb..f6111f516f8c3d 100644
--- a/mlir/python/mlir/dialects/transform/structured.py
+++ b/mlir/python/mlir/dialects/transform/structured.py
@@ -377,7 +377,7 @@ def __init__(
         pad_to_multiple_of: Optional[Union[DynamicIndexList, ArrayAttr]] = None,
         padding_values: Optional[Union[ArrayAttr, Sequence[Attribute]]] = None,
         padding_dimensions: OptionalIntList = None,
-        pack_paddings: OptionalIntList = None,
+        nofold_flags: OptionalIntList = None,
         transpose_paddings: Optional[
             Union[ArrayAttr, Sequence[Union[ArrayAttr, IntOrAttrList]]]
         ] = None,
@@ -407,7 +407,7 @@ def __init__(
             padding_values=padding_values,
             padding_dimensions=padding_dimensions,
             static_pad_to_multiple_of=static_pad_to_multiple_of,
-            pack_paddings=pack_paddings,
+            nofold_flags=nofold_flags,
             transpose_paddings=transpose_paddings,
             copy_back_op=copy_back_op,
             loc=loc,
diff --git a/mlir/test/Dialect/Linalg/matmul-shared-memory-padding.mlir b/mlir/test/Dialect/Linalg/matmul-shared-memory-padding.mlir
index 9c223737750a9b..8a3bb1bc52dc5d 100644
--- a/mlir/test/Dialect/Linalg/matmul-shared-memory-padding.mlir
+++ b/mlir/test/Dialect/Linalg/matmul-shared-memory-padding.mlir
@@ -59,7 +59,7 @@ module attributes {transform.with_named_sequence} {
     // Pad linalg.matmul.
     %padded, %pad, %copy_back = transform.structured.pad %tiled_linalg_op
         {padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
-         padding_dimensions=[0, 1, 2], pack_paddings=[1, 1, 1],
+         padding_dimensions=[0, 1, 2], nofold_flags=[1, 1, 1],
          copy_back_op = "linalg.copy"}
         : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
 
@@ -180,7 +180,7 @@ module attributes {transform.with_named_sequence} {
     // Pad linalg.matmul.
     %padded, %pad, %copy_back = transform.structured.pad %tiled_linalg_op
         {padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
-         padding_dimensions=[0, 1, 2], pack_paddings=[1, 1, 1],
+         padding_dimensions=[0, 1, 2], nofold_flags=[1, 1, 1],
          copy_back_op = "linalg.copy"}
         : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
 
diff --git a/mlir/test/Dialect/Linalg/pad-to-specific-memory-space.mlir b/mlir/test/Dialect/Linalg/pad-to-specific-memory-space.mlir
index 5e5657980ba120..373ed5f0d7908a 100644
--- a/mlir/test/Dialect/Linalg/pad-to-specific-memory-space.mlir
+++ b/mlir/test/Dialect/Linalg/pad-to-specific-memory-space.mlir
@@ -54,7 +54,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 1]
+      nofold_flags=[1, 1, 1]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     %buffer, %new_ops = transform.structured.bufferize_to_allocation %pad {memory_space = 3, emit_dealloc} : !transform.any_op
     %2 = transform.bufferization.one_shot_bufferize %arg1 {bufferize_function_boundaries=true} : (!transform.any_op) -> !transform.any_op
@@ -115,7 +115,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 1]
+      nofold_flags=[1, 1, 1]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.structured.vectorize %pad vector_sizes [10, 12] : !transform.any_op
     %vector_write = transform.structured.match ops{["vector.transfer_write"]} in %arg1 : (!transform.any_op) -> !transform.any_op
diff --git a/mlir/test/Dialect/Linalg/transform-op-pad.mlir b/mlir/test/Dialect/Linalg/transform-op-pad.mlir
index 120a525f3bdae9..ab2711545405ea 100644
--- a/mlir/test/Dialect/Linalg/transform-op-pad.mlir
+++ b/mlir/test/Dialect/Linalg/transform-op-pad.mlir
@@ -39,7 +39,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 0]
+      nofold_flags=[1, 1, 0]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.op<"bufferization.materialize_in_destination">)
     %p = transform.num_associations %copy_back : (!transform.op<"bufferization.materialize_in_destination">) -> !transform.param<i64>
     // expected-remark @below {{1}}
@@ -76,7 +76,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 pad_to_multiple_of [2, 2, 1] {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 0]
+      nofold_flags=[1, 1, 0]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.yield
   }
@@ -112,7 +112,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 pad_to_multiple_of [%c2, 2, 1] {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 0]
+      nofold_flags=[1, 1, 0]
     } : (!transform.any_op, !transform.param<i64>) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.yield
   }
@@ -155,7 +155,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 0]
+      nofold_flags=[1, 1, 0]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.yield
   }
@@ -178,7 +178,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 {
       padding_values=[0: i32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 0]
+      nofold_flags=[1, 1, 0]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.yield
   }
@@ -201,7 +201,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 {
       padding_values=["{foo}", 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 0]
+      nofold_flags=[1, 1, 0]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.yield
   }
@@ -210,7 +210,7 @@ module attributes {transform.with_named_sequence} {
 // -----
 
 // With all padded being static, there's nothing to pad. However, with the
-// `nofold` attribute set (see `pack_paddings`), the corresponding pad Ops are
+// `nofold` attribute set (see `nofold_flags`), the corresponding pad Ops are
 // preserved.
 
 // CHECK-LABEL: @zero_pad_static(
@@ -239,7 +239,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 0]
+      nofold_flags=[1, 1, 0]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.yield
   }
@@ -248,7 +248,7 @@ module attributes {transform.with_named_sequence} {
 // -----
 
 // With all padded dims being static, there's nothing to pad. However, with the
-// `nofold` attribute set (see `pack_paddings`), the corresponding pad Ops are
+// `nofold` attribute set (see `nofold_flags`), the corresponding pad Ops are
 // preserved. Same as above, but some dims are now dynamic.
 
 // CHECK-LABEL: @zero_pad_dynamic(
@@ -278,7 +278,7 @@ module attributes {transform.with_named_sequence} {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       // Note - only the static dim is padded
       padding_dimensions=[2],
-      pack_paddings=[1, 1, 1]
+      nofold_flags=[1, 1, 1]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.yield
   }
@@ -305,7 +305,7 @@ module attributes {transform.with_named_sequence} {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       // Note - attempting to pad non-static dim
       padding_dimensions=[1],
-      pack_paddings=[1, 1, 1]
+      nofold_flags=[1, 1, 1]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.yield
   }
@@ -362,7 +362,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 1]
+      nofold_flags=[1, 1, 1]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.yield
   }
@@ -414,7 +414,7 @@ module attributes {transform.with_named_sequence} {
     %padded, %pad, %copy_back = transform.structured.pad %0 {
       padding_values=[0.0 : f32, 0.0 : f32, 0.0 : f32],
       padding_dimensions=[0, 1, 2],
-      pack_paddings=[1, 1, 1]
+      nofold_flags=[1, 1, 1]
     } : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
     transform.yield
   }
diff --git a/mlir/test/Dialect/Linalg/transform-ops-invalid.mlir b/mlir/test/Dialect/Linalg/transform-ops-invalid.mlir
index e86d4962530a9a..a30b56c7c58e8f 100644
--- a/mlir/test/Dialect/Linalg/transform-ops-invalid.mlir
+++ b/mlir/test/Dialect/Linalg/transform-ops-invalid.mlir
@@ -18,8 +18,8 @@ transform.sequence failures(propagate) {
 
 transform.sequence failures(propagate) {
 ^bb0(%arg0: !transform.any_op):
-  // expected-error@below {{expects pack_paddings to contain booleans (0/1), found [1, 7]}}
-  transform.structured.pad %arg0 {pack_paddings=[1, 7]} : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
+  // expected-error@below {{expects nofold_flags to contain booleans (0/1), found [1, 7]}}
+  transform.structured.pad %arg0 {nofold_flags=[1, 7]} : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
 }
 
 // -----
diff --git a/mlir/test/python/dialects/transform_structured_ext.py b/mlir/test/python/dialects/transform_structured_ext.py
index 3ea73e8beea368..d029b3bfa6b118 100644
--- a/mlir/test/python/dialects/transform_structured_ext.py
+++ b/mlir/test/python/dialects/transform_structured_ext.py
@@ -304,7 +304,7 @@ def testPadOpNoArgs(target):
     # CHECK: transform.sequence
     # CHECK: transform.structured.pad
     # CHECK-NOT: copy_back_op
-    # CHECK-NOT: pack_paddings
+    # CHECK-NOT: nofold_flags
     # CHECK-NOT: pad_to_multiple_of
     # CHECK-NOT: padding_dimensions
     # CHECK-NOT: padding_values
@@ -319,7 +319,7 @@ def testPadOpArgs(target):
         pad_to_multiple_of=[128],
         padding_values=[FloatAttr.get_f32(42.0), StringAttr.get("0")],
         padding_dimensions=Attribute.parse("[1]"),
-        pack_paddings=[0],
+        nofold_flags=[0],
         transpose_paddings=[[1, Attribute.parse("0")], Attribute.parse("[0, 1]")],
         copy_back_op="linalg.copy",
     )
@@ -328,7 +328,7 @@ def testPadOpArgs(target):
     # CHECK: transform.structured.pad
     # CHECK-DAG: pad_to_multiple_of [128]
     # CHECK-DAG: copy_back_op = "linalg.copy"
-    # CHECK-DAG: pack_paddings = [0]
+    # CHECK-DAG: nofold_flags = [0]
     # CHECK-DAG: padding_dimensions = [1]
     # CHECK-DAG: padding_values = [4.200000e+01 : f32, "0"]
     # CHECK-DAG: transpose_paddings = {{\[}}[1, 0], [0, 1]]

javedabsar1 (Contributor) left a comment

The existing comments in two places already state this, e.g. "packPaddings ... nofold attribute" and "A flag for every operand to mark the PadOp as nofold which ...".

So thanks for this.

hanhanW (Contributor) left a comment

oops, sorry that I missed the PR. LGTM, thanks!

@@ -1757,7 +1757,7 @@ void transform::PadOp::build(OpBuilder &b, OperationState &result, Value target,
/*paddingDimensions=*/b.getI64ArrayAttr(paddingDimensions),
/*padToMultipleOf=*/dynamicPadToMultipleOf,
/*padToMultipleOf=*/staticPadToMultipleOf,
/*packPaddings=*/b.getI64ArrayAttr(packPaddings),
/*nofoldFlags=*/b.getI64ArrayAttr(nofoldFlags),

A contributor commented:

[no action required] side note: most of these function-argument comments are redundant to me because the variable name already spells it out.

banach-space merged commit a758bcd into llvm:main on Oct 15, 2024
12 checks passed
banach-space deleted the andrzej/rename_pack_paddings branch on October 15, 2024, 18:25
DanielCChen pushed a commit to DanielCChen/llvm-project that referenced this pull request on Oct 16, 2024.
yzhang93 added a commit that referenced this pull request on Oct 16, 2024:
The pack_paddings attribute was renamed to nofold_flags in
#111036, but some `packPadding`
identifiers remained unchanged. This PR renames those to keep things
consistent.