[mlir][sparse][tensor] replace bufferization with empty tensor #66450
Merged
Rationale: A bufferization.alloc_tensor can be directly replaced with tensor.empty since these are more or less semantically equivalent. The latter is considered a bit more "pure" with respect to SSA semantics.
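As an illustrative sketch of the one-for-one rewrite (the #SparseVector encoding and the function below are hypothetical, not taken from the patch's files; the encoding syntax follows the LLVM 17-era sparse_tensor dialect):

```mlir
// Hypothetical level-format encoding for a sparse vector.
#SparseVector = #sparse_tensor.encoding<{ lvlTypes = ["compressed"] }>

func.func @alloc_output(%arga: tensor<?xf64, #SparseVector>)
    -> tensor<?xf64, #SparseVector> {
  %c0 = arith.constant 0 : index
  %d = tensor.dim %arga, %c0 : tensor<?xf64, #SparseVector>
  // Before: %xv = bufferization.alloc_tensor(%d) : tensor<?xf64, #SparseVector>
  // After:
  %xv = tensor.empty(%d) : tensor<?xf64, #SparseVector>
  return %xv : tensor<?xf64, #SparseVector>
}
```

When the result is used as the `outs` operand of a `linalg` op, only the allocation line changes; every other use of the value stays intact.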
@llvm/pr-subscribers-mlir-sparse @llvm/pr-subscribers-mlir

Patch is 78.42 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/66450.diff

48 Files Affected:
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir
index 89bf215a2c7788b..4ef8b29ee4e1a84 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir
@@ -71,7 +71,7 @@ module {
     %c2 = arith.constant 2.0 : f64
     %d0 = tensor.dim %arga, %c0 : tensor<?x?xf64, #SparseMatrix>
     %d1 = tensor.dim %arga, %c1 : tensor<?x?xf64, #SparseMatrix>
-    %init = bufferization.alloc_tensor(%d0, %d1) : tensor<?x?xf64, #DenseMatrix>
+    %init = tensor.empty(%d0, %d1) : tensor<?x?xf64, #DenseMatrix>
     %0 = linalg.generic #trait_assign
        ins(%arga: tensor<?x?xf64, #SparseMatrix>)
       outs(%init: tensor<?x?xf64, #DenseMatrix>) {
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
index 420d3d8c6232744..317c7af990f78c4 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
@@ -48,7 +48,7 @@ module {
                        %argb: tensor<?xbf16, #SparseVector>) -> tensor<?xbf16, #DenseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xbf16, #SparseVector>
-    %xv = bufferization.alloc_tensor (%d) : tensor<?xbf16, #DenseVector>
+    %xv = tensor.empty (%d) : tensor<?xbf16, #DenseVector>
     %0 = linalg.generic #trait_vec_op
        ins(%arga, %argb: tensor<?xbf16, #SparseVector>, tensor<?xbf16, #SparseVector>)
       outs(%xv: tensor<?xbf16, #DenseVector>) {
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir
index 96ea972bd6b5f0e..7c8510d8fbabc92 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir
@@ -49,7 +49,7 @@ module {
                        %argb: tensor<?xf16, #SparseVector>) -> tensor<?xf16, #DenseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xf16, #SparseVector>
-    %xv = bufferization.alloc_tensor (%d) : tensor<?xf16, #DenseVector>
+    %xv = tensor.empty (%d) : tensor<?xf16, #DenseVector>
     %0 = linalg.generic #trait_vec_op
        ins(%arga, %argb: tensor<?xf16, #SparseVector>, tensor<?xf16, #SparseVector>)
       outs(%xv: tensor<?xf16, #DenseVector>) {
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir
index 0488f5186a4a77d..6cf99cf45997d43 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir
@@ -51,7 +51,7 @@ module {
   func.func @conv2d_all_sparse_DCSR(%input: tensor<8x8xi32, #DCSR>,
                                     %filter: tensor<3x3xi32, #DCSR>) -> tensor<6x6xi32, #DCSR> {
-    %s = bufferization.alloc_tensor() : tensor<6x6xi32, #DCSR>
+    %s = tensor.empty() : tensor<6x6xi32, #DCSR>
     %0 = linalg.conv_2d
       ins  (%input, %filter: tensor<8x8xi32, #DCSR>, tensor<3x3xi32, #DCSR>)
       outs (%s: tensor<6x6xi32, #DCSR>) -> tensor<6x6xi32, #DCSR>
@@ -60,7 +60,7 @@ module {
   func.func @conv2d_all_sparse_CSR(%input: tensor<8x8xi32, #CSR>,
                                    %filter: tensor<3x3xi32, #CSR>) -> tensor<6x6xi32, #CSR> {
-    %s = bufferization.alloc_tensor() : tensor<6x6xi32, #CSR>
+    %s = tensor.empty() : tensor<6x6xi32, #CSR>
     %0 = linalg.conv_2d
       ins  (%input, %filter: tensor<8x8xi32, #CSR>, tensor<3x3xi32, #CSR>)
       outs (%s: tensor<6x6xi32, #CSR>) -> tensor<6x6xi32, #CSR>
@@ -69,7 +69,7 @@ module {
   func.func @conv2d_all_sparse_CD(%input: tensor<8x8xi32, #CDR>,
                                   %filter: tensor<3x3xi32, #CDR>) -> tensor<6x6xi32, #CDR> {
-    %s = bufferization.alloc_tensor() : tensor<6x6xi32, #CDR>
+    %s = tensor.empty() : tensor<6x6xi32, #CDR>
     %0 = linalg.conv_2d
       ins  (%input, %filter: tensor<8x8xi32, #CDR>, tensor<3x3xi32, #CDR>)
       outs (%s: tensor<6x6xi32, #CDR>) -> tensor<6x6xi32, #CDR>
@@ -78,7 +78,7 @@ module {
   func.func @conv2d_all_sparse_CSC(%input: tensor<8x8xi32, #CSC>,
                                    %filter: tensor<3x3xi32, #CSC>) -> tensor<6x6xi32, #CSC> {
-    %s = bufferization.alloc_tensor() : tensor<6x6xi32, #CSC>
+    %s = tensor.empty() : tensor<6x6xi32, #CSC>
     %0 = linalg.conv_2d
       ins  (%input, %filter: tensor<8x8xi32, #CSC>, tensor<3x3xi32, #CSC>)
       outs (%s: tensor<6x6xi32, #CSC>) -> tensor<6x6xi32, #CSC>
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir
index 584906034d2d20e..71054e456e49475 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir
@@ -46,7 +46,7 @@ module {
       -> tensor<?xf64, #SparseVector> {
     %c0 = arith.constant 0 : index
     %d = tensor.dim %arg0, %c0 : tensor<?xf64, #SparseVector>
-    %xin = bufferization.alloc_tensor(%d) : tensor<?xf64, #SparseVector>
+    %xin = tensor.empty(%d) : tensor<?xf64, #SparseVector>
     %0 = linalg.generic #trait_op
        ins(%arg0: tensor<?xf64, #SparseVector>)
       outs(%xin: tensor<?xf64, #SparseVector>) {
@@ -61,7 +61,7 @@ module {
      -> tensor<?xi32, #SparseVector> {
     %c0 = arith.constant 0 : index
     %d = tensor.dim %arg0, %c0 : tensor<?xi32, #SparseVector>
-    %xin = bufferization.alloc_tensor(%d) : tensor<?xi32, #SparseVector>
+    %xin = tensor.empty(%d) : tensor<?xi32, #SparseVector>
     %0 = linalg.generic #trait_op
        ins(%arg0: tensor<?xi32, #SparseVector>)
       outs(%xin: tensor<?xi32, #SparseVector>) {
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir
index 917f8a4838f4de5..826bf0da0ec81f3 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir
@@ -73,7 +73,7 @@ module {
                          %argb: tensor<?xi32, #SparseVector>) -> tensor<?xi32, #SparseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xi32, #SparseVector>
-    %xv = bufferization.alloc_tensor(%d) : tensor<?xi32, #SparseVector>
+    %xv = tensor.empty(%d) : tensor<?xi32, #SparseVector>
     %0 = linalg.generic #trait_vec_op
        ins(%arga, %argb: tensor<?xi32, #SparseVector>, tensor<?xi32, #SparseVector>)
       outs(%xv: tensor<?xi32, #SparseVector>) {
@@ -97,7 +97,7 @@ module {
                          %argb: tensor<?xf64>) -> tensor<?xf64, #SparseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xf64, #SparseVector>
-    %xv = bufferization.alloc_tensor(%d) : tensor<?xf64, #SparseVector>
+    %xv = tensor.empty(%d) : tensor<?xf64, #SparseVector>
     %0 = linalg.generic #trait_vec_op
        ins(%arga, %argb: tensor<?xf64, #SparseVector>, tensor<?xf64>)
       outs(%xv: tensor<?xf64, #SparseVector>) {
@@ -121,7 +121,7 @@ module {
                          %argb: tensor<?xf64, #SparseVector>) -> tensor<?xf64, #SparseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xf64, #SparseVector>
-    %xv = bufferization.alloc_tensor(%d) : tensor<?xf64, #SparseVector>
+    %xv = tensor.empty(%d) : tensor<?xf64, #SparseVector>
     %0 = linalg.generic #trait_vec_op
        ins(%arga, %argb: tensor<?xf64, #SparseVector>, tensor<?xf64, #SparseVector>)
       outs(%xv: tensor<?xf64, #SparseVector>) {
@@ -139,7 +139,7 @@ module {
   func.func @vector_index(%arga: tensor<?xf64, #SparseVector>) -> tensor<?xi32, #SparseVector> {
     %c = arith.constant 0 : index
     %d = tensor.dim %arga, %c : tensor<?xf64, #SparseVector>
-    %xv = bufferization.alloc_tensor(%d) : tensor<?xi32, #SparseVector>
+    %xv = tensor.empty(%d) : tensor<?xi32, #SparseVector>
     %0 = linalg.generic #trait_vec_scale
        ins(%arga: tensor<?xf64, #SparseVector>)
       outs(%xv: tensor<?xi32, #SparseVector>) {
@@ -166,7 +166,7 @@ module {
     %c1 = arith.constant 1 : index
     %d0 = tensor.dim %arga, %c0 : tensor<?x?xf64, #DCSR>
     %d1 = tensor.dim %arga, %c1 : tensor<?x?xf64, #DCSR>
-    %xv = bufferization.alloc_tensor(%d0, %d1) : tensor<?x?xf64, #DCSR>
+    %xv = tensor.empty(%d0, %d1) : tensor<?x?xf64, #DCSR>
     %0 = linalg.generic #trait_mat_op
        ins(%arga, %argb: tensor<?x?xf64, #DCSR>, tensor<?x?xf64, #DCSR>)
       outs(%xv: tensor<?x?xf64, #DCSR>) {
@@ -191,7 +191,7 @@ module {
   // Tensor addition (use semi-ring binary operation).
   func.func @add_tensor_1(%A: tensor<4x4xf64, #DCSR>,
                           %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xf64, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xf64, #DCSR>
+    %C = tensor.empty() : tensor<4x4xf64, #DCSR>
     %0 = linalg.generic #trait_mat_op
        ins(%A, %B: tensor<4x4xf64, #DCSR>,
                    tensor<4x4xf64, #DCSR>)
@@ -213,7 +213,7 @@ module {
   // Same as @add_tensor_1, but use sparse_tensor.yield instead of identity to yield value.
   func.func @add_tensor_2(%A: tensor<4x4xf64, #DCSR>,
                           %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xf64, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xf64, #DCSR>
+    %C = tensor.empty() : tensor<4x4xf64, #DCSR>
     %0 = linalg.generic #trait_mat_op
        ins(%A, %B: tensor<4x4xf64, #DCSR>,
                    tensor<4x4xf64, #DCSR>)
@@ -241,7 +241,7 @@ module {
   // Performs triangular add/sub operation (using semi-ring binary op).
   func.func @triangular(%A: tensor<4x4xf64, #DCSR>,
                         %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xf64, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xf64, #DCSR>
+    %C = tensor.empty() : tensor<4x4xf64, #DCSR>
     %0 = linalg.generic #trait_mat_op
        ins(%A, %B: tensor<4x4xf64, #DCSR>,
                    tensor<4x4xf64, #DCSR>)
@@ -274,7 +274,7 @@ module {
   // Perform sub operation (using semi-ring binary op) with a constant threshold.
   func.func @sub_with_thres(%A: tensor<4x4xf64, #DCSR>,
                             %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xf64, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xf64, #DCSR>
+    %C = tensor.empty() : tensor<4x4xf64, #DCSR>
     // Defines out-block constant bounds.
     %thres_out_up = arith.constant 2.0 : f64
     %thres_out_lo = arith.constant -2.0 : f64
@@ -323,7 +323,7 @@ module {
   // Performs isEqual only on intersecting elements.
   func.func @intersect_equal(%A: tensor<4x4xf64, #DCSR>,
                              %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xi8, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xi8, #DCSR>
+    %C = tensor.empty() : tensor<4x4xi8, #DCSR>
     %0 = linalg.generic #trait_mat_op
        ins(%A, %B: tensor<4x4xf64, #DCSR>,
                    tensor<4x4xf64, #DCSR>)
@@ -346,7 +346,7 @@ module {
   // Keeps values on left, negate value on right, ignore value when overlapping.
   func.func @only_left_right(%A: tensor<4x4xf64, #DCSR>,
                              %B: tensor<4x4xf64, #DCSR>) -> tensor<4x4xf64, #DCSR> {
-    %C = bufferization.alloc_tensor() : tensor<4x4xf64, #DCSR>
+    %C = tensor.empty() : tensor<4x4xf64, #DCSR>
     %0 = linalg.generic #trait_mat_op
        ins(%A, %B: tensor<4x4xf64, #DCSR>,
                    tensor<4x4xf64, #DCSR>)
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir
index f6c72581153bfac..87ab88b8d9de99c 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir
@@ -66,7 +66,7 @@ module {
   func.func @cmp_lhs_sparse(%arga: tensor<4x4xf64, #DCSR>,
                             %argb: tensor<4x4xf64>) -> tensor<4x4xi8, #DCSR> {
-    %argx = bufferization.alloc_tensor() : tensor<4x4xi8, #DCSR>
+    %argx = tensor.empty() : tensor<4x4xi8, #DCSR>
     %0 = linalg.generic #trait
        ins(%arga, %argb: tensor<4x4xf64, #DCSR>, tensor<4x4xf64>)
       outs(%argx: tensor<4x4xi8, #DCSR>) {
@@ -80,7 +80,7 @@ module {
   func.func @cmp_all_sparse(%arga: tensor<4x4xf64, #DCSR>,
                             %argb: tensor<4x4xf64, #DCSR>) -> tensor<4x4xi8, #DCSR> {
-    %argx = bufferization.alloc_tensor() : tensor<4x4xi8, #DCSR>
+    %argx = tensor.empty() : tensor<4x4xi8, #DCSR>
     %0 = linalg.generic #trait
        ins(%arga, %argb: tensor<4x4xf64, #DCSR>, tensor<4x4xf64, #DCSR>)
       outs(%argx: tensor<4x4xi8, #DCSR>) {
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir
index 3203473f68b324d..45ea95d1a6f36fd 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir
@@ -43,8 +43,8 @@ module {
     %c1 = arith.constant 1 : index
     %c2 = arith.constant 2 : index
     %c3 = arith.constant 3 : index
-    %t1 = bufferization.alloc_tensor() : tensor<4x5xf64, #DCSR>
-    %t2 = bufferization.alloc_tensor(%c2, %c3) : tensor<?x?xf64, #DCSR&g...
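Note that, like bufferization.alloc_tensor, tensor.empty takes exactly one index operand per dynamic (?) dimension of its result type, so the operand lists carry over unchanged. A minimal sketch of the dynamic-shape case (the #DCSR encoding and function name are illustrative, not from the patch; encoding syntax follows the LLVM 17-era sparse_tensor dialect):

```mlir
// Hypothetical doubly compressed sparse matrix encoding.
#DCSR = #sparse_tensor.encoding<{ lvlTypes = ["compressed", "compressed"] }>

func.func @alloc_dyn(%d0: index, %d1: index) -> tensor<?x?xf64, #DCSR> {
  // One index operand per '?' in the result shape.
  %t = tensor.empty(%d0, %d1) : tensor<?x?xf64, #DCSR>
  return %t : tensor<?x?xf64, #DCSR>
}
```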
PeimingLiu approved these changes on Sep 15, 2023.
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request on Sep 19, 2023.
zahiraam pushed a commit to tahonermann/llvm-project that referenced this pull request on Oct 24, 2023.