
[mlir][sparse] rename sparse_tensor.(un)pack to sparse_tensor.(dis)assemble #67717


Merged: 1 commit into llvm:main from peiming-clean on Sep 28, 2023

Conversation

PeimingLiu (Member)


The names Pack/Unpack are overloaded in many other places, so rename the operations to avoid confusion.
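
For illustration, a minimal sketch of the rename in IR syntax, adapted from the example in the updated op documentation in the diff below (the %values and %coordinates operands and the #COO encoding alias come from that example; #COO is assumed to be declared as an ordered COO sparse encoding):

```mlir
%values      = arith.constant dense<[ 1.1,   2.2,   3.3 ]> : tensor<3xf64>
%coordinates = arith.constant dense<[[0,0], [1,2], [1,3]]> : tensor<3x2xindex>

// Previously spelled sparse_tensor.pack; only the mnemonic changes:
%st = sparse_tensor.assemble %values, %coordinates
    : tensor<3xf64>, tensor<3x2xindex> to tensor<3x4xf64, #COO>
// yields COO format |1.1, 0.0, 0.0, 0.0|
//     of 3x4 matrix |0.0, 0.0, 2.2, 3.3|
//                   |0.0, 0.0, 0.0, 0.0|
```

Operand and result types, the Pure trait, and the verifier logic are unchanged; only the op mnemonics and the corresponding C++ class names are renamed.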

@llvmbot (Member) commented Sep 28, 2023

@llvm/pr-subscribers-mlir
@llvm/pr-subscribers-mlir-gpu

@llvm/pr-subscribers-mlir-sparse

Changes


The names Pack/Unpack are overloaded in many other places, so rename the operations to avoid confusion.


Patch is 28.49 KiB, truncated to 20.00 KiB below; full version: https://github.com/llvm/llvm-project/pull/67717.diff

13 Files Affected:

  • (modified) mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td (+6-6)
  • (modified) mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp (+2-2)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/BufferizableOpInterfaceImpl.cpp (+10-9)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseGPUCodegen.cpp (+4-3)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp (+8-6)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp (+4-4)
  • (modified) mlir/test/Dialect/SparseTensor/GPU/gpu_spgemm_lib.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/invalid.mlir (+7-7)
  • (modified) mlir/test/Dialect/SparseTensor/pack_copy.mlir (+2-2)
  • (modified) mlir/test/Dialect/SparseTensor/roundtrip.mlir (+4-4)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_pack.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack.mlir (+8-8)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack_libgen.mlir (+4-4)
diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
index e2a2c09c5e9a01c..c566f674f7d4011 100644
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
@@ -53,14 +53,14 @@ def SparseTensor_NewOp : SparseTensor_Op<"new", [Pure]>,
   let assemblyFormat = "$source attr-dict `:` type($source) `to` type($result)";
 }
 
-def SparseTensor_PackOp : SparseTensor_Op<"pack", [Pure]>,
+def SparseTensor_AssembleOp : SparseTensor_Op<"assemble", [Pure]>,
     Arguments<(ins TensorOf<[AnyType]>:$values,
                    Variadic<TensorOf<[AnySignlessIntegerOrIndex]>>:$levels)>,
     Results<(outs AnySparseTensor: $result)> {
   let summary = "Returns a sparse tensor from the given values, levels";
 
   let description = [{
-    Packs the values and per-level coordinate or position arrays into a sparse tensor.
+    Assembles the values and per-level coordinate or position arrays into a sparse tensor.
     The order and types of provided levels must be consistent with the actual storage
     layout of the returned sparse tensor described below.
 
@@ -87,7 +87,7 @@ def SparseTensor_PackOp : SparseTensor_Op<"pack", [Pure]>,
     ```mlir
     %values      = arith.constant dense<[ 1.1,   2.2,   3.3 ]> : tensor<3xf64>
     %coordinates = arith.constant dense<[[0,0], [1,2], [1,3]]> : tensor<3x2xindex>
-    %st = sparse_tensor.pack %values, %coordinates
+    %st = sparse_tensor.assemble %values, %coordinates
         : tensor<3xf64>, tensor<3x2xindex> to tensor<3x4xf64, #COO>
     // yields COO format |1.1, 0.0, 0.0, 0.0|
     //     of 3x4 matrix |0.0, 0.0, 2.2, 3.3|
@@ -102,7 +102,7 @@ def SparseTensor_PackOp : SparseTensor_Op<"pack", [Pure]>,
   let hasVerifier = 1;
 }
 
-def SparseTensor_UnpackOp : SparseTensor_Op<"unpack", [Pure, SameVariadicResultSize]>,
+def SparseTensor_DisassembleOp : SparseTensor_Op<"disassemble", [Pure, SameVariadicResultSize]>,
     Arguments<(ins AnySparseTensor:$tensor,
                    TensorOf<[AnyType]>:$out_values,
                    Variadic<TensorOf<[AnySignlessIntegerOrIndex]>>:$out_levels)>,
@@ -113,7 +113,7 @@ def SparseTensor_UnpackOp : SparseTensor_Op<"unpack", [Pure, SameVariadicResultS
   let summary = "Returns the (values, coordinates) pair unpacked from the input tensor";
 
   let description = [{
-    The unpack operation is the inverse of `sparse_tensor::pack`.  It returns
+    The disassemble operation is the inverse of `sparse_tensor::assemble`.  It returns
     the values and per-level position and coordinate array to the user
     from the sparse tensor along with the actual length of the memory used in
     each returned buffer. This operation can be used for returning an
@@ -132,7 +132,7 @@ def SparseTensor_UnpackOp : SparseTensor_Op<"unpack", [Pure, SameVariadicResultS
     //    of 3x4 matrix |0.0, 0.0, 2.2, 3.3|
     //                  |0.0, 0.0, 0.0, 0.0|
     %v, %p, %c, %v_len, %p_len, %c_len =
-        sparse_tensor.unpack %sp : tensor<3x4xf64, #COO>
+        sparse_tensor.disassemble %sp : tensor<3x4xf64, #COO>
           outs(%od, %op, %oi : tensor<3xf64>, tensor<2xindex>, tensor<3x2xindex>)
                             -> tensor<3xf64>, (tensor<2xindex>, tensor<3x2xindex>), index, (index, index)
     // %v = arith.constant dense<[ 1.1,   2.2,   3.3 ]> : tensor<3xf64>
diff --git a/mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp b/mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
index 1c75df41e33daa4..b962dda20cfe64a 100644
--- a/mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
+++ b/mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
@@ -974,14 +974,14 @@ static LogicalResult verifyPackUnPack(Operation *op, bool requiresStaticShape,
   return success();
 }
 
-LogicalResult PackOp::verify() {
+LogicalResult AssembleOp::verify() {
   const auto valuesTp = getRankedTensorType(getValues());
   const auto lvlsTp = getLevels().getTypes();
   const auto resTp = getSparseTensorType(getResult());
   return verifyPackUnPack(*this, true, resTp, valuesTp, lvlsTp);
 }
 
-LogicalResult UnpackOp::verify() {
+LogicalResult DisassembleOp::verify() {
   if (getOutValues().getType() != getRetValues().getType())
     return emitError("output values and return value type mismatch");
 
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/BufferizableOpInterfaceImpl.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/BufferizableOpInterfaceImpl.cpp
index 89c6495a3112ad0..d54cd9ad8cdbe7a 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/BufferizableOpInterfaceImpl.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/BufferizableOpInterfaceImpl.cpp
@@ -122,11 +122,11 @@ struct NewOpInterface
   bool bufferizesToAllocation(Operation *op, Value value) const { return true; }
 };
 
-struct PackOpInterface
-    : public SparseBufferizableOpInterfaceExternalModel<PackOpInterface,
-                                                        sparse_tensor::PackOp> {
+struct AssembleOpInterface
+    : public SparseBufferizableOpInterfaceExternalModel<
+          AssembleOpInterface, sparse_tensor::AssembleOp> {
   bool bufferizesToAllocation(Operation *op, Value value) const {
-    // PackOp reuses all the buffers instead of allocating new ones
+    // AssembleOp reuses all the buffers instead of allocating new ones
     return false;
   }
 
@@ -143,7 +143,7 @@ struct PackOpInterface
   AliasingValueList getAliasingValues(Operation *op, OpOperand &opOperand,
                                       const AnalysisState &state) const {
     assert(op->getNumResults() == 1);
-    // PackOp reuses the input tensors as values/coordinates instead of
+    // AssembleOp reuses the input tensors as values/coordinates instead of
     // creating new ones when packing into a COO format.
     return {{op->getOpResult(0), BufferRelation::Equivalent}};
   }
@@ -154,8 +154,9 @@ struct PackOpInterface
   }
 };
 
-struct UnpackOpInterface : public SparseBufferizableOpInterfaceExternalModel<
-                               UnpackOpInterface, sparse_tensor::UnpackOp> {
+struct DisassembleOpInterface
+    : public SparseBufferizableOpInterfaceExternalModel<
+          DisassembleOpInterface, sparse_tensor::DisassembleOp> {
   bool bufferizesToAllocation(Operation *op, Value value) const {
     // The output buffer is pre-allocated by the user.
     return false;
@@ -326,8 +327,8 @@ void mlir::sparse_tensor::registerBufferizableOpInterfaceExternalModels(
     sparse_tensor::InsertOp::attachInterface<InsertOpInterface>(*ctx);
     sparse_tensor::NumberOfEntriesOp::attachInterface<
         NumberOfEntriesOpInterface>(*ctx);
-    sparse_tensor::PackOp::attachInterface<PackOpInterface>(*ctx);
-    sparse_tensor::UnpackOp::attachInterface<UnpackOpInterface>(*ctx);
+    sparse_tensor::AssembleOp::attachInterface<AssembleOpInterface>(*ctx);
+    sparse_tensor::DisassembleOp::attachInterface<DisassembleOpInterface>(*ctx);
     sparse_tensor::ToCoordinatesBufferOp::attachInterface<
         ToCoordinatesBufferOpInterface>(*ctx);
     sparse_tensor::ToCoordinatesOp::attachInterface<ToCoordinatesOpInterface>(
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/SparseGPUCodegen.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/SparseGPUCodegen.cpp
index 91b346c8a9b4c4d..ea48d7ec23250e2 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/SparseGPUCodegen.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/SparseGPUCodegen.cpp
@@ -795,10 +795,10 @@ rewriteSpGEMM(PatternRewriter &rewriter, linalg::GenericOp op, bool enableRT,
   Value rowC = e1.getResult(0);
   token = e1.getAsyncToken();
   auto e2 = genAllocBuffer(rewriter, loc, cTp.getCrdType(), zero, token);
-  Value colC = e2.getResult(0);  // no free needed
+  Value colC = e2.getResult(0); // no free needed
   token = e2.getAsyncToken();
   auto e3 = genAllocBuffer(rewriter, loc, dnCType, zero, token);
-  Value valC = e3.getResult(0);  // no free needed
+  Value valC = e3.getResult(0); // no free needed
   token = e3.getAsyncToken();
   Operation *spGenC =
       genSpMat(rewriter, loc, spmatHandleTp, tokenTp, token, szm, szn, zero,
@@ -900,7 +900,8 @@ rewriteSpGEMM(PatternRewriter &rewriter, linalg::GenericOp op, bool enableRT,
   Value vt = rewriter.create<bufferization::ToTensorOp>(loc, valH);
   Value rt = rewriter.create<bufferization::ToTensorOp>(loc, rowH);
   Value ct = rewriter.create<bufferization::ToTensorOp>(loc, colH);
-  rewriter.replaceOpWithNewOp<PackOp>(op, c.getType(), vt, ValueRange{rt, ct});
+  rewriter.replaceOpWithNewOp<AssembleOp>(op, c.getType(), vt,
+                                          ValueRange{rt, ct});
   return success();
 }
 
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
index 8a0ec1c14928305..3a3ea311c49d988 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorCodegen.cpp
@@ -1244,10 +1244,10 @@ class SparseNumberOfEntriesConverter
   }
 };
 
-struct SparsePackOpConverter : public OpConversionPattern<PackOp> {
+struct SparseAssembleOpConverter : public OpConversionPattern<AssembleOp> {
   using OpConversionPattern::OpConversionPattern;
   LogicalResult
-  matchAndRewrite(PackOp op, OpAdaptor adaptor,
+  matchAndRewrite(AssembleOp op, OpAdaptor adaptor,
                   ConversionPatternRewriter &rewriter) const override {
     Location loc = op.getLoc();
     const auto stt = getSparseTensorType(op.getResult());
@@ -1347,13 +1347,15 @@ struct SparsePackOpConverter : public OpConversionPattern<PackOp> {
   }
 };
 
-struct SparseUnpackOpConverter : public OpConversionPattern<UnpackOp> {
+struct SparseDisassembleOpConverter
+    : public OpConversionPattern<DisassembleOp> {
   using OpConversionPattern::OpConversionPattern;
-  SparseUnpackOpConverter(TypeConverter &typeConverter, MLIRContext *context)
+  SparseDisassembleOpConverter(TypeConverter &typeConverter,
+                               MLIRContext *context)
       : OpConversionPattern(typeConverter, context) {}
 
   LogicalResult
-  matchAndRewrite(UnpackOp op, OpAdaptor adaptor,
+  matchAndRewrite(DisassembleOp op, OpAdaptor adaptor,
                   ConversionPatternRewriter &rewriter) const override {
     auto desc = getDescriptorFromTensorTuple(adaptor.getTensor());
     Location loc = op.getLoc();
@@ -1571,7 +1573,7 @@ struct SparseNewOpConverter : public OpConversionPattern<NewOp> {
 void mlir::populateSparseTensorCodegenPatterns(
     TypeConverter &typeConverter, RewritePatternSet &patterns,
     bool createSparseDeallocs, bool enableBufferInitialization) {
-  patterns.add<SparsePackOpConverter, SparseUnpackOpConverter,
+  patterns.add<SparseAssembleOpConverter, SparseDisassembleOpConverter,
                SparseReturnConverter, SparseCallConverter, SparseDimOpConverter,
                SparseCastConverter, SparseExtractSliceConverter,
                SparseTensorLoadConverter, SparseExpandConverter,
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
index ad8a8043ce8dbb4..37f6971cf4df1a2 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
@@ -1493,15 +1493,15 @@ class SparseTensorOutConverter : public OpConversionPattern<OutOp> {
 };
 
 /// Sparse conversion rule for the sparse_tensor.pack operator.
-class SparseTensorPackConverter : public OpConversionPattern<PackOp> {
+class SparseTensorAssembleConverter : public OpConversionPattern<AssembleOp> {
 public:
   using OpConversionPattern::OpConversionPattern;
   LogicalResult
-  matchAndRewrite(PackOp op, OpAdaptor adaptor,
+  matchAndRewrite(AssembleOp op, OpAdaptor adaptor,
                   ConversionPatternRewriter &rewriter) const override {
     const Location loc = op->getLoc();
     const auto dstTp = getSparseTensorType(op.getResult());
-    // PackOp always returns a statically shaped tensor result.
+    // AssembleOp always returns a statically shaped tensor result.
     assert(dstTp.hasStaticDimShape());
     SmallVector<Value> dimSizes = getDimSizes(rewriter, loc, dstTp);
     Value dst =
@@ -1546,7 +1546,7 @@ void mlir::populateSparseTensorConversionPatterns(
            SparseTensorToValuesConverter, SparseNumberOfEntriesConverter,
            SparseTensorLoadConverter, SparseTensorInsertConverter,
            SparseTensorExpandConverter, SparseTensorCompressConverter,
-           SparseTensorOutConverter, SparseTensorPackConverter>(
+           SparseTensorOutConverter, SparseTensorAssembleConverter>(
           typeConverter, patterns.getContext());
   patterns.add<SparseTensorConvertConverter>(typeConverter,
                                              patterns.getContext(), options);
diff --git a/mlir/test/Dialect/SparseTensor/GPU/gpu_spgemm_lib.mlir b/mlir/test/Dialect/SparseTensor/GPU/gpu_spgemm_lib.mlir
index 147e8132c92e451..a5d4ee2b55f546e 100644
--- a/mlir/test/Dialect/SparseTensor/GPU/gpu_spgemm_lib.mlir
+++ b/mlir/test/Dialect/SparseTensor/GPU/gpu_spgemm_lib.mlir
@@ -86,7 +86,7 @@
 // CHECK:           %[[VAL_a2:.*]] = bufferization.to_tensor %[[VAL_83]] : memref<?xf32>
 // CHECK:           %[[VAL_a3:.*]] = bufferization.to_tensor %[[VAL_81]] : memref<?xindex>
 // CHECK:           %[[VAL_a4:.*]] = bufferization.to_tensor %[[VAL_82]] : memref<?xindex>
-// CHECK:           %[[VAL_a5:.*]] = sparse_tensor.pack %[[VAL_a2]], %[[VAL_a3]], %[[VAL_a4]] : tensor<?xf32>, tensor<?xindex>, tensor<?xindex> to tensor<8x8xf32, #{{.*}}>
+// CHECK:           %[[VAL_a5:.*]] = sparse_tensor.assemble %[[VAL_a2]], %[[VAL_a3]], %[[VAL_a4]] : tensor<?xf32>, tensor<?xindex>, tensor<?xindex> to tensor<8x8xf32, #{{.*}}>
 // CHECK:           return %[[VAL_a5]] : tensor<8x8xf32, #{{.*}}>
 // CHECK:         }
 func.func @matmulCSR(%A: tensor<8x8xf32, #CSR>,
diff --git a/mlir/test/Dialect/SparseTensor/invalid.mlir b/mlir/test/Dialect/SparseTensor/invalid.mlir
index 2a13f208fa225d3..3c33ba42f4d388f 100644
--- a/mlir/test/Dialect/SparseTensor/invalid.mlir
+++ b/mlir/test/Dialect/SparseTensor/invalid.mlir
@@ -13,7 +13,7 @@ func.func @invalid_new_dense(%arg0: !llvm.ptr<i8>) -> tensor<32xf32> {
 func.func @non_static_pack_ret(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x1xi32>)
                             -> tensor<?xf64, #SparseVector> {
   // expected-error@+1 {{the sparse-tensor must have static shape}}
-  %0 = sparse_tensor.pack %values, %pos, %coordinates
+  %0 = sparse_tensor.assemble %values, %pos, %coordinates
      : tensor<6xf64>, tensor<2xi32>, tensor<6x1xi32> to tensor<?xf64, #SparseVector>
   return %0 : tensor<?xf64, #SparseVector>
 }
@@ -25,7 +25,7 @@ func.func @non_static_pack_ret(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coo
 func.func @invalid_pack_type(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x1xi32>)
                             -> tensor<100xf32, #SparseVector> {
   // expected-error@+1 {{input/output element-types don't match}}
-  %0 = sparse_tensor.pack %values, %pos, %coordinates
+  %0 = sparse_tensor.assemble %values, %pos, %coordinates
      : tensor<6xf64>, tensor<2xi32>, tensor<6x1xi32> to tensor<100xf32, #SparseVector>
   return %0 : tensor<100xf32, #SparseVector>
 }
@@ -37,7 +37,7 @@ func.func @invalid_pack_type(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coord
 func.func @invalid_pack_type(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x3xi32>)
                             -> tensor<100x2xf64, #SparseVector> {
   // expected-error@+1 {{input/output trailing COO level-ranks don't match}}
-  %0 = sparse_tensor.pack %values, %pos, %coordinates
+  %0 = sparse_tensor.assemble %values, %pos, %coordinates
      : tensor<6xf64>, tensor<2xi32>, tensor<6x3xi32> to tensor<100x2xf64, #SparseVector>
   return %0 : tensor<100x2xf64, #SparseVector>
 }
@@ -49,7 +49,7 @@ func.func @invalid_pack_type(%values: tensor<6xf64>, %pos: tensor<2xi32>, %coord
 func.func @invalid_pack_mis_position(%values: tensor<6xf64>, %coordinates: tensor<6xi32>)
                                      -> tensor<2x100xf64, #CSR> {
   // expected-error@+1 {{inconsistent number of fields between input/output}}
-  %0 = sparse_tensor.pack %values, %coordinates
+  %0 = sparse_tensor.assemble %values, %coordinates
      : tensor<6xf64>, tensor<6xi32> to tensor<2x100xf64, #CSR>
   return %0 : tensor<2x100xf64, #CSR>
 }
@@ -60,7 +60,7 @@ func.func @invalid_pack_mis_position(%values: tensor<6xf64>, %coordinates: tenso
 
 func.func @invalid_unpack_type(%sp: tensor<100xf32, #SparseVector>, %values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x1xi32>) {
   // expected-error@+1 {{input/output element-types don't match}}
-  %rv, %rp, %rc, %vl, %pl, %cl = sparse_tensor.unpack %sp : tensor<100xf32, #SparseVector>
+  %rv, %rp, %rc, %vl, %pl, %cl = sparse_tensor.disassemble %sp : tensor<100xf32, #SparseVector>
                   outs(%values, %pos, %coordinates : tensor<6xf64>, tensor<2xi32>, tensor<6x1xi32>)
                   -> tensor<6xf64>, (tensor<2xi32>, tensor<6x1xi32>), index, (index, index)
   return
@@ -72,7 +72,7 @@ func.func @invalid_unpack_type(%sp: tensor<100xf32, #SparseVector>, %values: ten
 
 func.func @invalid_unpack_type(%sp: tensor<100x2xf64, #SparseVector>, %values: tensor<6xf64>, %pos: tensor<2xi32>, %coordinates: tensor<6x3xi32>) {
   // expected-error@+1 {{input/output trailing COO level-ranks don't match}}
-  %rv, %rp, %rc, %vl, %pl, %cl = sparse_tensor.unpack %sp : tensor<100x2xf64, #SparseVector>
+  %rv, %rp, %rc, %vl, %pl, %cl = sparse_tensor.disassemble %sp : tensor<100x2xf64, #SparseVector>
                   outs(%values, %pos, %coordinates : tensor<6xf64>, tensor<2xi32>, tensor<6x3xi32>)
                   -> tensor<6xf64>, (tensor<2xi32>, tensor<6x3xi32>), index, (index, index)
   return
@@ -84,7 +84,7 @@ func.func @invalid_unpack_type(%sp: tensor<100x2xf64, #SparseVector>, %values: t
 
 func.func @invalid_unpack_mis_position(%sp: tensor<2x100xf64, #CSR>, %values: tensor<6xf64>, %coordinates: tensor<6xi32>) {
   // expected-error@+1 {{inconsistent number of fields between input/output}}
-  %rv, %rc, %vl, %pl = sparse_tensor.unpack %sp : tensor<2x100xf64, #CSR>
+  %rv, %rc, %vl, %pl = sparse_tensor.disassemble %sp : tensor<2x100xf64, #CSR>
              outs(%values, %coordinates : tensor<6xf64>, tensor<6xi32>)
              -> tensor<6xf64>, (tensor<6xi32>), index, (index)
   return
diff --git a/mlir/test/Dialect/SparseTensor/pack_copy.mlir b/mlir/test/Dialect/SparseTensor/pack_copy.mlir
index aee7793671c903b..e60f9bb7149b320 100644
--- a/mlir/test/Dialect/SparseTensor/pack_copy.mlir
+++ b/mlir/test/Dialect/SparseTensor/pack_copy.mlir
@@ -35,7 +35,7 @@ func.func @foo(%arg0: tensor<3xf64>  {bufferization.writable = false},
     //
     // Pack the buffers into a sparse tensors.
     //
-    %pack = sparse_tensor.pack %arg0, %arg2, %arg1
+    %pack = sparse_tensor.assemble %arg0, %arg2, %arg1
       : tensor<3xf64>,
         tensor<11xi32>,
         tensor<3xi32> to tensor<10x10xf64, #CSR>
@@ -76,7 +76,7 @@ func.func @bar(%arg0: tensor<3xf64>  {bufferization.writable = true},
     //
     // Pack the buffers into a sparse tensors.
     //
-    %pack = sparse_tensor.pack %arg0, %arg2, %arg1
+    %pack = sparse_tensor.assemble %arg0, %arg2, %arg1
       : tensor<3xf64>,
         tensor<11xi32>,
         tensor<3xi32> to tensor<10x10xf64, #CSR>
diff --git a/mlir/test/Dialect/SparseTensor/roundtrip.mlir b/mlir/test/Dialect/SparseTensor/roundtrip.mlir
index 33471497cc69767..317a0735fb8bf84 100644
--- a/mlir/test/Dialect/SparseTensor/roundtrip.mlir
+++ b/mlir/test/Dialect/SparseTensor/roundtrip.mlir
@@ -19,11 +19,11 @@ func.func @sparse_new(%arg0: !llvm.ptr<i8>) -> tensor<128xf64, #SparseVector> {
 // CHECK-SAME: %[[D:.*]]: tensor<6xf64>,
 // CHECK-SAME: %[[P:.*]]: tensor<2xi32>,
 // CHECK-SAME: %[[I:.*]]: tensor<6x1xi32>)
-//       CHECK: %[[R:.*]] = sparse_tensor.pack %[[D]], %[[P]], %[[I]]
+//       CHECK: %[[R:.*]] = sparse_tensor.assemble %[[D]], %[[P]], %[[I]]
 //       CHECK: return %[[R]] : tensor<100xf64, #{{.*}}>
 func.func @sparse_pack(%data: tensor<6xf64>, %pos: tensor<2xi32>, %index: tensor<6x1xi32>)
                             -> tensor<100xf64, #Sp...
[truncated]
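
The patch is truncated above before the remaining test updates. For reference, this is the disassemble side of the rename, reproduced from the updated op documentation earlier in the diff (%sp is a 3x4 COO tensor like the one assembled in the example above, and %od, %op, %oi are caller-provided output buffers, as in that documentation):

```mlir
%v, %p, %c, %v_len, %p_len, %c_len =
    sparse_tensor.disassemble %sp : tensor<3x4xf64, #COO>
      outs(%od, %op, %oi : tensor<3xf64>, tensor<2xindex>, tensor<3x2xindex>)
                        -> tensor<3xf64>, (tensor<2xindex>, tensor<3x2xindex>), index, (index, index)
// %v = arith.constant dense<[ 1.1,   2.2,   3.3 ]> : tensor<3xf64>
```

Each returned index (%v_len, %p_len, %c_len) reports the actual length of the memory used in the corresponding output buffer.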

PeimingLiu merged commit 6ca47eb into llvm:main on Sep 28, 2023.
PeimingLiu deleted the peiming-clean branch on September 28, 2023 at 18:01.
legrosbuffle pushed a commit to legrosbuffle/llvm-project that referenced this pull request on Sep 29, 2023:

[mlir][sparse] rename sparse_tensor.(un)pack to sparse_tensor.(dis)assemble (llvm#67717)

The names Pack/Unpack are overloaded in many other places, so rename the operations to avoid confusion.
Labels: mlir:gpu, mlir:sparse (Sparse compiler in MLIR), mlir