
[mlir][sparse][tensor] replace bufferization with empty tensor #66450


Merged
merged 2 commits into llvm:main on Sep 15, 2023

Conversation

@aartbik (Contributor) commented Sep 15, 2023

Rationale:
A bufferization.alloc_tensor can be directly replaced
with tensor.empty since these are more or less semantically
equivalent. The latter is considered a bit more "pure"
with respect to SSA semantics.
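For illustration, a minimal sketch of the one-line substitution on a plain dense tensor (the sparse #DCSR/#CSR encodings used in the tests below are left out; the function name and the trailing linalg.fill are illustrative only, not taken from the patch):

  func.func @alloc_then_fill(%d0: index, %d1: index) -> tensor<?x?xf64> {
    %cst = arith.constant 0.0 : f64
    // Before: %init = bufferization.alloc_tensor(%d0, %d1) : tensor<?x?xf64>
    // After: the same uninitialized tensor value, expressed as a pure tensor-dialect op.
    %init = tensor.empty(%d0, %d1) : tensor<?x?xf64>
    // Fill the result so its contents are well-defined before use.
    %0 = linalg.fill ins(%cst : f64) outs(%init : tensor<?x?xf64>) -> tensor<?x?xf64>
    return %0 : tensor<?x?xf64>
  }

In the sparse tests changed below, the substitution is exactly this textual replacement, applied to tensors carrying #DCSR, #CSR, #SparseVector, and similar encodings.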

@llvmbot (Member) commented Sep 15, 2023

@llvm/pr-subscribers-mlir-sparse
@llvm/pr-subscribers-mlir-gpu

@llvm/pr-subscribers-mlir

Changes

Rationale: A bufferization.alloc_tensor can be directly replaced with tensor.empty since these are more or less semantically equivalent. The latter is considered a bit more "pure" with respect to SSA semantics.

--

Patch is 78.42 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/66450.diff

48 Files Affected:

  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir (+4-4)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir (+11-11)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_complex32.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_complex64.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_complex_ops.mlir (+7-7)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_1d_nwc_wcf.mlir (+3-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_2d.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_2d_nchw_fchw.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_2d_nhwc_hwcf.mlir (+4-4)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_3d.mlir (+4-4)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_3d_ndhwc_dhwcf.mlir (+3-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_dot.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_expand.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_filter_conv2d.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_index.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_index_dense.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_1d.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_2d.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_3d.mlir (+4-4)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matmul.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matmul_slice.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matrix_ops.mlir (+3-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_out_mult_elt.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_out_reduction.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pooling_nhwc.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_re_im.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom_prod.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sampled_mm_fusion.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_scf_nested.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_select.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_semiring_select.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sign.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_strided_conv_2d_nhwc_hwcf.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum_c32.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_tensor_mul.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_tensor_ops.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_transpose.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_transpose_coo.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_unary.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_vector_ops.mlir (+4-4)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-gemm-lib.mlir (+1-1)
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir
index 89bf215a2c7788b..4ef8b29ee4e1a84 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir
@@ -71,7 +71,7 @@ module {
     %c2 = arith.constant 2.0 : f64
     %d0 = tensor.dim %arga, %c0 : tensor<?x?xf64, #SparseMatrix>
     %d1 = tensor.dim %arga, %c1 : tensor<?x?xf64, #SparseMatrix>
-    %init = bufferization.alloc_tensor(%d0, %d1) : tensor<?x?xf64, #DenseMatrix>
+    %init = tensor.empty(%d0, %d1) : tensor<?x?xf64, #DenseMatrix>
     %0 = linalg.generic #trait_assign
        ins(%arga: tensor<?x?xf64, #SparseMatrix>)
       outs(%init: tensor<?x?xf64, #DenseMatrix>) {
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
index 420d3d8c6232744..317c7af990f78c4 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
@@ -48,7 +48,7 @@ module {
                         %argb: tensor<?xbf16, #SparseVector>) -> tensor<?xbf16, #DenseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xbf16, #SparseVector>
-    %xv = bufferization.alloc_tensor (%d) : tensor<?xbf16, #DenseVector>
+    %xv = tensor.empty (%d) : tensor<?xbf16, #DenseVector>
     %0 = linalg.generic #trait_vec_op
        ins(%arga, %argb: tensor<?xbf16, #SparseVector>, tensor<?xbf16, #SparseVector>)
         outs(%xv: tensor<?xbf16, #DenseVector>) {
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir
index 96ea972bd6b5f0e..7c8510d8fbabc92 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir
@@ -49,7 +49,7 @@ module {
                         %argb: tensor<?xf16, #SparseVector>) -> tensor<?xf16, #DenseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xf16, #SparseVector>
-    %xv = bufferization.alloc_tensor (%d) : tensor<?xf16, #DenseVector>
+    %xv = tensor.empty (%d) : tensor<?xf16, #DenseVector>
     %0 = linalg.generic #trait_vec_op
        ins(%arga, %argb: tensor<?xf16, #SparseVector>, tensor<?xf16, #SparseVector>)
         outs(%xv: tensor<?xf16, #DenseVector>) {
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir
index 0488f5186a4a77d..6cf99cf45997d43 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir
@@ -51,7 +51,7 @@ module {
 
   func.func @conv2d_all_sparse_DCSR(%input:  tensor<8x8xi32, #DCSR>,
                %filter: tensor<3x3xi32, #DCSR>) -> tensor<6x6xi32, #DCSR> {
-    %s = bufferization.alloc_tensor() : tensor<6x6xi32, #DCSR>
+    %s = tensor.empty() : tensor<6x6xi32, #DCSR>
     %0 = linalg.conv_2d
       ins  (%input, %filter: tensor<8x8xi32, #DCSR>, tensor<3x3xi32, #DCSR>)
       outs (%s: tensor<6x6xi32, #DCSR>) -> tensor<6x6xi32, #DCSR>
@@ -60,7 +60,7 @@ module {
 
   func.func @conv2d_all_sparse_CSR(%input:  tensor<8x8xi32, #CSR>,
                %filter: tensor<3x3xi32, #CSR>) -> tensor<6x6xi32, #CSR> {
-    %s = bufferization.alloc_tensor() : tensor<6x6xi32, #CSR>
+    %s = tensor.empty() : tensor<6x6xi32, #CSR>
     %0 = linalg.conv_2d
       ins  (%input, %filter: tensor<8x8xi32, #CSR>, tensor<3x3xi32, #CSR>)
       outs (%s: tensor<6x6xi32, #CSR>) -> tensor<6x6xi32, #CSR>
@@ -69,7 +69,7 @@ module {
 
   func.func @conv2d_all_sparse_CD(%input:  tensor<8x8xi32, #CDR>,
                %filter: tensor<3x3xi32, #CDR>) -> tensor<6x6xi32, #CDR> {
-    %s = bufferization.alloc_tensor() : tensor<6x6xi32, #CDR>
+    %s = tensor.empty() : tensor<6x6xi32, #CDR>
     %0 = linalg.conv_2d
       ins  (%input, %filter: tensor<8x8xi32, #CDR>, tensor<3x3xi32, #CDR>)
       outs (%s: tensor<6x6xi32, #CDR>) -> tensor<6x6xi32, #CDR>
@@ -78,7 +78,7 @@ module {
 
   func.func @conv2d_all_sparse_CSC(%input:  tensor<8x8xi32, #CSC>,
                %filter: tensor<3x3xi32, #CSC>) -> tensor<6x6xi32, #CSC> {
-    %s = bufferization.alloc_tensor() : tensor<6x6xi32, #CSC>
+    %s = tensor.empty() : tensor<6x6xi32, #CSC>
     %0 = linalg.conv_2d
       ins  (%input, %filter: tensor<8x8xi32, #CSC>, tensor<3x3xi32, #CSC>)
       outs (%s: tensor<6x6xi32, #CSC>) -> tensor<6x6xi32, #CSC>
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir
index 584906034d2d20e..71054e456e49475 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir
@@ -46,7 +46,7 @@ module {
                              -> tensor<?xf64, #SparseVector> {
     %c0 = arith.constant 0 : index
     %d = tensor.dim %arg0, %c0 : tensor<?xf64, #SparseVector>
-    %xin = bufferization.alloc_tensor(%d) : tensor<?xf64, #SparseVector>
+    %xin = tensor.empty(%d) : tensor<?xf64, #SparseVector>
     %0 = linalg.generic #trait_op
       ins(%arg0: tensor<?xf64, #SparseVector>)
       outs(%xin: tensor<?xf64, #SparseVector>) {
@@ -61,7 +61,7 @@ module {
                              -> tensor<?xi32, #SparseVector> {
     %c0 = arith.constant 0 : index
     %d = tensor.dim %arg0, %c0 : tensor<?xi32, #SparseVector>
-    %xin = bufferization.alloc_tensor(%d) : tensor<?xi32, #SparseVector>
+    %xin = tensor.empty(%d) : tensor<?xi32, #SparseVector>
     %0 = linalg.generic #trait_op
       ins(%arg0: tensor<?xi32, #SparseVector>)
       outs(%xin: tensor<?xi32, #SparseVector>) {
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir
index 917f8a4838f4de5..826bf0da0ec81f3 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir
@@ -73,7 +73,7 @@ module {
                         %argb: tensor<?xi32, #SparseVector>) -> tensor<?xi32, #SparseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xi32, #SparseVector>
-    %xv = bufferization.alloc_tensor(%d) : tensor<?xi32, #SparseVector>
+    %xv = tensor.empty(%d) : tensor<?xi32, #SparseVector>
     %0 = linalg.generic #trait_vec_op
        ins(%arga, %argb: tensor<?xi32, #SparseVector>, tensor<?xi32, #SparseVector>)
         outs(%xv: tensor<?xi32, #SparseVector>) {
@@ -97,7 +97,7 @@ module {
                         %argb: tensor<?xf64>) -> tensor<?xf64, #SparseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xf64, #SparseVector>
-    %xv = bufferization.alloc_tensor(%d) : tensor<?xf64, #SparseVector>
+    %xv = tensor.empty(%d) : tensor<?xf64, #SparseVector>
     %0 = linalg.generic #trait_vec_op
        ins(%arga, %argb: tensor<?xf64, #SparseVector>, tensor<?xf64>)
         outs(%xv: tensor<?xf64, #SparseVector>) {
@@ -121,7 +121,7 @@ module {
                             %argb: tensor<?xf64, #SparseVector>) -> tensor<?xf64, #SparseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xf64, #SparseVector>
-    %xv = bufferization.alloc_tensor(%d) : tensor<?xf64, #SparseVector>
+    %xv = tensor.empty(%d) : tensor<?xf64, #SparseVector>
     %0 = linalg.generic #trait_vec_op
        ins(%arga, %argb: tensor<?xf64, #SparseVector>, tensor<?xf64, #SparseVector>)
         outs(%xv: tensor<?xf64, #SparseVector>) {
@@ -139,7 +139,7 @@ module {
   func.func @vector_index(%arga: tensor<?xf64, #SparseVector>) -> tensor<?xi32, #SparseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xf64, #SparseVector>
-    %xv = bufferization.alloc_tensor(%d) : tensor<?xi32, #SparseVector>
+    %xv = tensor.empty(%d) : tensor<?xi32, #SparseVector>
     %0 = linalg.generic #trait_vec_scale
        ins(%arga: tensor<?xf64, #SparseVector>)
         outs(%xv: tensor<?xi32, #SparseVector>) {
@@ -166,7 +166,7 @@ module {
     %c1 = arith.constant 1 : index
     %d0 = tensor.dim %arga, %c0 : tensor<?x?xf64, #DCSR>
     %d1 = tensor.dim %arga, %c1 : tensor<?x?xf64, #DCSR>
-    %xv = bufferization.alloc_tensor(%d0, %d1) : tensor<?x?xf64, #DCSR>
+    %xv = tensor.empty(%d0, %d1) : tensor<?x?xf64, #DCSR>
     %0 = linalg.generic #trait_mat_op
        ins(%arga, %argb: tensor<?x?xf64, #DCSR>, tensor<?x?xf64, #DCSR>)
         outs(%xv: tensor<?x?xf64, #DCSR>) {
@@ -191,7 +191,7 @@ module {
   // Tensor addition (use semi-ring binary operation).
   func.func @add_tensor_1(%A: tensor<4x4xf64, #DCSR>,
                           %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xf64, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xf64, #DCSR>
+    %C = tensor.empty() : tensor<4x4xf64, #DCSR>
     %0 = linalg.generic #trait_mat_op
       ins(%A, %B: tensor<4x4xf64, #DCSR>,
                   tensor<4x4xf64, #DCSR>)
@@ -213,7 +213,7 @@ module {
   // Same as @add_tensor_1, but use sparse_tensor.yield instead of identity to yield value.
   func.func @add_tensor_2(%A: tensor<4x4xf64, #DCSR>,
                           %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xf64, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xf64, #DCSR>
+    %C = tensor.empty() : tensor<4x4xf64, #DCSR>
     %0 = linalg.generic #trait_mat_op
       ins(%A, %B: tensor<4x4xf64, #DCSR>,
                   tensor<4x4xf64, #DCSR>)
@@ -241,7 +241,7 @@ module {
   // Performs triangular add/sub operation (using semi-ring binary op).
   func.func @triangular(%A: tensor<4x4xf64, #DCSR>,
                         %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xf64, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xf64, #DCSR>
+    %C = tensor.empty() : tensor<4x4xf64, #DCSR>
     %0 = linalg.generic #trait_mat_op
       ins(%A, %B: tensor<4x4xf64, #DCSR>,
                   tensor<4x4xf64, #DCSR>)
@@ -274,7 +274,7 @@ module {
   // Perform sub operation (using semi-ring binary op) with a constant threshold.
   func.func @sub_with_thres(%A: tensor<4x4xf64, #DCSR>,
                             %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xf64, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xf64, #DCSR>
+    %C = tensor.empty() : tensor<4x4xf64, #DCSR>
     // Defines out-block constant bounds.
     %thres_out_up = arith.constant 2.0 : f64
     %thres_out_lo = arith.constant -2.0 : f64
@@ -323,7 +323,7 @@ module {
   // Performs isEqual only on intersecting elements.
   func.func @intersect_equal(%A: tensor<4x4xf64, #DCSR>,
                              %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xi8, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xi8, #DCSR>
+    %C = tensor.empty() : tensor<4x4xi8, #DCSR>
     %0 = linalg.generic #trait_mat_op
       ins(%A, %B: tensor<4x4xf64, #DCSR>,
                   tensor<4x4xf64, #DCSR>)
@@ -346,7 +346,7 @@ module {
   // Keeps values on left, negate value on right, ignore value when overlapping.
   func.func @only_left_right(%A: tensor<4x4xf64, #DCSR>,
                              %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xf64, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xf64, #DCSR>
+    %C = tensor.empty() : tensor<4x4xf64, #DCSR>
     %0 = linalg.generic #trait_mat_op
       ins(%A, %B: tensor<4x4xf64, #DCSR>,
                   tensor<4x4xf64, #DCSR>)
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir
index f6c72581153bfac..87ab88b8d9de99c 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir
@@ -66,7 +66,7 @@ module {
 
   func.func @cmp_lhs_sparse(%arga: tensor<4x4xf64, #DCSR>,
                             %argb: tensor<4x4xf64>) -> tensor<4x4xi8, #DCSR> {
-    %argx = bufferization.alloc_tensor() : tensor<4x4xi8, #DCSR>
+    %argx = tensor.empty() : tensor<4x4xi8, #DCSR>
     %0 = linalg.generic #trait
        ins(%arga, %argb: tensor<4x4xf64, #DCSR>, tensor<4x4xf64>)
       outs(%argx: tensor<4x4xi8, #DCSR>) {
@@ -80,7 +80,7 @@ module {
 
   func.func @cmp_all_sparse(%arga: tensor<4x4xf64, #DCSR>,
                             %argb: tensor<4x4xf64, #DCSR>) -> tensor<4x4xi8, #DCSR> {
-    %argx = bufferization.alloc_tensor() : tensor<4x4xi8, #DCSR>
+    %argx = tensor.empty() : tensor<4x4xi8, #DCSR>
     %0 = linalg.generic #trait
        ins(%arga, %argb: tensor<4x4xf64, #DCSR>, tensor<4x4xf64, #DCSR>)
       outs(%argx: tensor<4x4xi8, #DCSR>) {
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir
index 3203473f68b324d..45ea95d1a6f36fd 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir
@@ -43,8 +43,8 @@ module {
     %c1 = arith.constant 1 : index
     %c2 = arith.constant 2 : index
     %c3 = arith.constant 3 : index
-    %t1 = bufferization.alloc_tensor() : tensor<4x5xf64, #DCSR>
-    %t2 = bufferization.alloc_tensor(%c2, %c3) : tensor<?x?xf64, #DCSR>...

@aartbik aartbik merged commit d2e787d into llvm:main Sep 15, 2023
@aartbik aartbik deleted the bik branch September 15, 2023 19:08
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
…66450)

Rationale:
    A bufferization.alloc_tensor can be directly replaced
    with tensor.empty since these are more or less semantically
    equivalent. The latter is considered a bit more "pure"
    with respect to SSA semantics.
zahiraam pushed a commit to tahonermann/llvm-project that referenced this pull request Oct 24, 2023
…66450)

Rationale:
    A bufferization.alloc_tensor can be directly replaced
    with tensor.empty since these are more or less semantically
    equivalent. The latter is considered a bit more "pure"
    with respect to SSA semantics.
Labels: mlir:gpu, mlir:sparse (Sparse compiler in MLIR), mlir
Projects: None yet
3 participants