
[mlir][vector][NFC] Make function name more meaningful in lit tests. #94538


Merged (2 commits) on Jun 7, 2024. Changes shown are from 1 commit.
75 changes: 38 additions & 37 deletions mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
@@ -102,6 +102,25 @@ func.func @transfer_read_dims_mismatch_non_zero_indices(

// -----

func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
Contributor:
Suggested change
func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
func.func @transfer_read_dims_mismatch_non_contiguous_non_zero_indices(
  • contiguous -> non_contiguous (probably a typo?)
  • idx -> indices (for consistency with transfer_read_dims_mismatch_non_zero_indices)

A similar comment applies to transfer_write_dims_mismatch_contiguous_non_zero_idx.

Contributor (Author):

Sorry, I don't follow why it would be non_contiguous. I think it is writing a vector<2x2xf32> to contiguous memory?

Contributor (@banach-space), Jun 6, 2024:

%subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>

Note the strides - the memref is not contiguous when considering all dims. However, your point is correct:

I think it is writing vector<2x2xf32> to a contiguous memory?

When only considering the trailing 2 dims, memref is indeed contiguous. IIRC, the point of this test was to verify that the pattern works even if:

  • overall, memref is non-contiguous, however
  • the inner dims are contiguous and that's sufficient.

Does that make sense? If yes, then we probably need a better name or a comment to explain 😅 If not, bear with me :)
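
For illustration only (this sketch is not part of the PR, and the function name @contiguity_sketch is made up), here is a minimal MLIR snippet showing the collapse the flattening pattern produces for this memref, matching the CHECK lines in this test. The trailing 3x2 dims have strides [2, 1], so their 3 * 2 = 6 elements are contiguous and can be merged into a single dim of stride 1, while dim 1 has stride 10 > 6, leaving a gap between slices, so the outer dims are left alone.

func.func @contiguity_sketch(
    %m : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>)
    -> memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>> {
  // Only dims 2 and 3 are collapsed; their strides [2, 1] exactly tile the
  // sizes [3, 2], so the resulting dim of size 6 has stride 1.
  %collapsed = memref.collapse_shape %m [[0], [1], [2, 3]]
      : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
        into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
  return %collapsed : memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
}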

Contributor (Author):

I see, it makes sense to me now! Thanks for the explanation. I found that you're improving the comments in #94490. Given that it will follow the naming convention, I'll leave the "comment improvement" to the PR that you're working on. And I will update the function name in this PR.

%subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
%idx0 : index, %idx1 : index) -> vector<2x2xf32> {
%c0 = arith.constant 0 : index
%cst_1 = arith.constant 0.000000e+00 : f32
%8 = vector.transfer_read %subview[%c0, %idx0, %idx1, %c0], %cst_1 {in_bounds = [true, true]} : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>, vector<2x2xf32>
return %8 : vector<2x2xf32>
}

// CHECK: #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
// CHECK-LABEL: func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
// CHECK: %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
// CHECK: %[[APPLY:.*]] = affine.apply #[[$MAP]]()

// CHECK-128B-LABEL: func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
// CHECK-128B: memref.collapse_shape

// -----

// The input memref has a dynamic trailing shape and hence is not flattened.
// TODO: This case could be supported via memref.dim

@@ -212,6 +231,25 @@ func.func @transfer_write_dims_mismatch_contiguous(

// -----

func.func @transfer_write_dims_mismatch_contiguous_non_zero_idx(
%value : vector<2x2xf32>,
%subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
%idx0 : index, %idx1 : index) {
%c0 = arith.constant 0 : index
vector.transfer_write %value, %subview[%c0, %idx0, %idx1, %c0] {in_bounds = [true, true]} : vector<2x2xf32>, memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
return
}

// CHECK: #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
// CHECK-LABEL: func.func @transfer_write_dims_mismatch_contiguous_non_zero_idx(
// CHECK: %[[APPLY:.*]] = affine.apply #[[$MAP]]()
// CHECK: %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>

// CHECK-128B-LABEL: func @transfer_write_dims_mismatch_contiguous_non_zero_idx(
// CHECK-128B: memref.collapse_shape

// -----

func.func @transfer_write_dims_mismatch_non_contiguous(
%arg : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>, %vec : vector<2x1x2x2xi8>) {
%c0 = arith.constant 0 : index
@@ -459,43 +497,6 @@ func.func @fold_unit_dims_entirely(%arg0 : vector<8xi32>,
// CHECK-128B-LABEL: func @fold_unit_dims_entirely(
// CHECK-128B-NOT: memref.collapse_shape


// -----

func.func @regression_non_contiguous_dim_read(%subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
%idx0 : index, %idx1 : index) -> vector<2x2xf32> {
%c0 = arith.constant 0 : index
%cst_1 = arith.constant 0.000000e+00 : f32
%8 = vector.transfer_read %subview[%c0, %idx0, %idx1, %c0], %cst_1 {in_bounds = [true, true]} : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>, vector<2x2xf32>
return %8 : vector<2x2xf32>
}

// CHECK: #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
// CHECK-LABEL: func.func @regression_non_contiguous_dim_read(
// CHECK: %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
// CHECK: %[[APPLY:.*]] = affine.apply #[[$MAP]]()

// CHECK-128B-LABEL: func @regression_non_contiguous_dim_read(
// CHECK-128B: memref.collapse_shape

// -----

func.func @regression_non_contiguous_dim_write(%value : vector<2x2xf32>,
%subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
%idx0 : index, %idx1 : index) {
%c0 = arith.constant 0 : index
vector.transfer_write %value, %subview[%c0, %idx0, %idx1, %c0] {in_bounds = [true, true]} : vector<2x2xf32>, memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
return
}

// CHECK: #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
// CHECK-LABEL: func.func @regression_non_contiguous_dim_write(
// CHECK: %[[APPLY:.*]] = affine.apply #[[$MAP]]()
// CHECK: %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>

// CHECK-128B-LABEL: func @regression_non_contiguous_dim_write(
// CHECK-128B: memref.collapse_shape

// -----

func.func @negative_out_of_bound_transfer_read(