[mlir] MemRefToSPIRV propagate alignment attributes from MemRef ops. #151723
Conversation
Thank you for submitting a Pull Request (PR) to the LLVM Project! This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page. If this is not working for you, it is probably because you do not have write permissions for the repository; in that case you can instead tag reviewers by name in a comment.

If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR with a comment saying "Ping". The common courtesy "ping" rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide. You can also ask questions in a comment on this PR, on the LLVM Discord, or on the forums.
PSB requires alignment; in the other storage classes alignment is optional.

The conversion rejects alignments that do not fit in 32 bits because the SPIR-V spec specifies alignment as a 32-bit power-of-two number.
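For reference, the spec constraint mentioned above (32-bit, power-of-two) can be expressed as a small predicate. This is only an illustrative sketch; the helper name `isEncodableAlignment` is hypothetical and not part of the patch, which itself only rejects values that do not fit in 32 bits.

```cpp
#include <cstdint>
#include <limits>

// Returns true when `alignment` can be encoded as a SPIR-V `Aligned` memory
// operand: zero means "no alignment requested"; otherwise the value must fit
// in the 32-bit literal used by SPIR-V and be a power of two.
static bool isEncodableAlignment(uint64_t alignment) {
  if (alignment == 0)
    return true;
  if (alignment > std::numeric_limits<uint32_t>::max())
    return false;
  return (alignment & (alignment - 1)) == 0; // power-of-two check
}
```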
There was an early return in calculateMemoryRequirements that looked explicitly for the alignment attribute and only set the alignment attribute. However, this was not correct for the following reasons (a simplified sketch of the corrected merging behaviour follows the list):

* Alignment was set only if the alignment and memory_access attributes were both present on the memref operation; the case where only the alignment attribute was present was not handled.
* When both attributes were present, the memory_access attribute was not updated to Aligned if it was not already marked Aligned.
* When both attributes were present, other memory requirements (e.g., non_temporal) were not added as attributes.
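The corrected behaviour can be pictured with a self-contained sketch: both the nontemporal flag and the alignment contribute to the result, and an alignment always forces the Aligned bit. The enum values, struct, and function names below are illustrative assumptions, not the SPIR-V dialect's actual C++ API.

```cpp
#include <cstdint>
#include <optional>

// Illustrative stand-ins for the dialect's memory-access bitmask.
enum class MemoryAccess : uint32_t {
  None = 0,
  Aligned = 1u << 1,
  Nontemporal = 1u << 2,
};

inline MemoryAccess operator|(MemoryAccess a, MemoryAccess b) {
  return static_cast<MemoryAccess>(static_cast<uint32_t>(a) |
                                   static_cast<uint32_t>(b));
}

struct Requirements {
  MemoryAccess access;
  std::optional<uint32_t> alignment;
};

// Merge both inputs instead of returning early on one of them: the
// nontemporal flag is always recorded, and a requested alignment both sets
// the Aligned bit and carries the alignment value.
Requirements mergeRequirements(bool nontemporal,
                               std::optional<uint32_t> alignment) {
  MemoryAccess access = MemoryAccess::None;
  if (nontemporal)
    access = access | MemoryAccess::Nontemporal;
  if (alignment) {
    access = access | MemoryAccess::Aligned;
    return {access, alignment};
  }
  return {access, std::nullopt};
}
```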
// CHECK: %[[VAL:.+]] = spirv.Load "StorageBuffer" %[[ADDR]] ["Aligned", 32] : i8
// CHECK: %[[ZERO_I8:.+]] = spirv.Constant 0 : i8
// CHECK: %[[BOOL:.+]] = spirv.INotEqual %[[VAL]], %[[ZERO_I8]] : i8
%0 = memref.load %src[%i] { alignment = 32 } : memref<4xi1, #spirv.storage_class<StorageBuffer>>
Can you also add one non-temporal load to see if both get propagated?
Thanks for the suggestion! I've added it here: 0337e22
@llvm/pr-subscribers-mlir

Author: Erick Ochoa Lopez (amd-eochoalo)

Changes

Propagating alignment attributes from memref operations into the SPIR-V dialect.

Full diff: https://github.com/llvm/llvm-project/pull/151723.diff

2 Files Affected:
- (modified) mlir/lib/Conversion/MemRefToSPIRV/MemRefToSPIRV.cpp
- (modified) mlir/test/Conversion/MemRefToSPIRV/memref-to-spirv.mlir
diff --git a/mlir/lib/Conversion/MemRefToSPIRV/MemRefToSPIRV.cpp b/mlir/lib/Conversion/MemRefToSPIRV/MemRefToSPIRV.cpp
index 7a705336bf11c..e730998f153b0 100644
--- a/mlir/lib/Conversion/MemRefToSPIRV/MemRefToSPIRV.cpp
+++ b/mlir/lib/Conversion/MemRefToSPIRV/MemRefToSPIRV.cpp
@@ -22,6 +22,7 @@
#include "mlir/IR/MLIRContext.h"
#include "mlir/IR/Visitors.h"
#include <cassert>
+#include <limits>
#include <optional>
#define DEBUG_TYPE "memref-to-spirv-pattern"
@@ -465,7 +466,13 @@ struct MemoryRequirements {
/// Given an accessed SPIR-V pointer, calculates its alignment requirements, if
/// any.
static FailureOr<MemoryRequirements>
-calculateMemoryRequirements(Value accessedPtr, bool isNontemporal) {
+calculateMemoryRequirements(Value accessedPtr, bool isNontemporal,
+ uint64_t preferredAlignment) {
+
+ if (std::numeric_limits<uint32_t>::max() < preferredAlignment) {
+ return failure();
+ }
+
MLIRContext *ctx = accessedPtr.getContext();
auto memoryAccess = spirv::MemoryAccess::None;
@@ -474,7 +481,10 @@ calculateMemoryRequirements(Value accessedPtr, bool isNontemporal) {
}
auto ptrType = cast<spirv::PointerType>(accessedPtr.getType());
- if (ptrType.getStorageClass() != spirv::StorageClass::PhysicalStorageBuffer) {
+ bool mayOmitAlignment =
+ !preferredAlignment &&
+ ptrType.getStorageClass() != spirv::StorageClass::PhysicalStorageBuffer;
+ if (mayOmitAlignment) {
if (memoryAccess == spirv::MemoryAccess::None) {
return MemoryRequirements{spirv::MemoryAccessAttr{}, IntegerAttr{}};
}
@@ -483,6 +493,7 @@ calculateMemoryRequirements(Value accessedPtr, bool isNontemporal) {
}
// PhysicalStorageBuffers require the `Aligned` attribute.
+ // Other storage types may show an `Aligned` attribute.
auto pointeeType = dyn_cast<spirv::ScalarType>(ptrType.getPointeeType());
if (!pointeeType)
return failure();
@@ -494,7 +505,8 @@ calculateMemoryRequirements(Value accessedPtr, bool isNontemporal) {
memoryAccess = memoryAccess | spirv::MemoryAccess::Aligned;
auto memAccessAttr = spirv::MemoryAccessAttr::get(ctx, memoryAccess);
- auto alignment = IntegerAttr::get(IntegerType::get(ctx, 32), *sizeInBytes);
+ auto alignmentValue = preferredAlignment ? preferredAlignment : *sizeInBytes;
+ auto alignment = IntegerAttr::get(IntegerType::get(ctx, 32), alignmentValue);
return MemoryRequirements{memAccessAttr, alignment};
}
@@ -508,16 +520,9 @@ calculateMemoryRequirements(Value accessedPtr, LoadOrStoreOp loadOrStoreOp) {
llvm::is_one_of<LoadOrStoreOp, memref::LoadOp, memref::StoreOp>::value,
"Must be called on either memref::LoadOp or memref::StoreOp");
- Operation *memrefAccessOp = loadOrStoreOp.getOperation();
- auto memrefMemAccess = memrefAccessOp->getAttrOfType<spirv::MemoryAccessAttr>(
- spirv::attributeName<spirv::MemoryAccess>());
- auto memrefAlignment =
- memrefAccessOp->getAttrOfType<IntegerAttr>("alignment");
- if (memrefMemAccess && memrefAlignment)
- return MemoryRequirements{memrefMemAccess, memrefAlignment};
-
return calculateMemoryRequirements(accessedPtr,
- loadOrStoreOp.getNontemporal());
+ loadOrStoreOp.getNontemporal(),
+ loadOrStoreOp.getAlignment().value_or(0));
}
LogicalResult
diff --git a/mlir/test/Conversion/MemRefToSPIRV/memref-to-spirv.mlir b/mlir/test/Conversion/MemRefToSPIRV/memref-to-spirv.mlir
index d0ddac8cd801c..7c765f70136bb 100644
--- a/mlir/test/Conversion/MemRefToSPIRV/memref-to-spirv.mlir
+++ b/mlir/test/Conversion/MemRefToSPIRV/memref-to-spirv.mlir
@@ -85,6 +85,51 @@ func.func @load_i1(%src: memref<4xi1, #spirv.storage_class<StorageBuffer>>, %i :
return %0: i1
}
+// CHECK-LABEL: func @load_aligned
+// CHECK-SAME: (%[[SRC:.+]]: memref<4xi1, #spirv.storage_class<StorageBuffer>>, %[[IDX:.+]]: index)
+func.func @load_aligned(%src: memref<4xi1, #spirv.storage_class<StorageBuffer>>, %i : index) -> i1 {
+ // CHECK-DAG: %[[SRC_CAST:.+]] = builtin.unrealized_conversion_cast %[[SRC]] : memref<4xi1, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<4 x i8, stride=1> [0])>, StorageBuffer>
+ // CHECK-DAG: %[[IDX_CAST:.+]] = builtin.unrealized_conversion_cast %[[IDX]]
+ // CHECK: %[[ZERO:.*]] = spirv.Constant 0 : i32
+ // CHECK: %[[ADDR:.+]] = spirv.AccessChain %[[SRC_CAST]][%[[ZERO]], %[[IDX_CAST]]]
+ // CHECK: %[[VAL:.+]] = spirv.Load "StorageBuffer" %[[ADDR]] ["Aligned", 32] : i8
+ // CHECK: %[[ZERO_I8:.+]] = spirv.Constant 0 : i8
+ // CHECK: %[[BOOL:.+]] = spirv.INotEqual %[[VAL]], %[[ZERO_I8]] : i8
+ %0 = memref.load %src[%i] { alignment = 32 } : memref<4xi1, #spirv.storage_class<StorageBuffer>>
+ // CHECK: return %[[BOOL]]
+ return %0: i1
+}
+
+// CHECK-LABEL: func @load_aligned_nontemporal
+// CHECK-SAME: (%[[SRC:.+]]: memref<4xi1, #spirv.storage_class<StorageBuffer>>, %[[IDX:.+]]: index)
+func.func @load_aligned_nontemporal(%src: memref<4xi1, #spirv.storage_class<StorageBuffer>>, %i : index) -> i1 {
+ // CHECK-DAG: %[[SRC_CAST:.+]] = builtin.unrealized_conversion_cast %[[SRC]] : memref<4xi1, #spirv.storage_class<StorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<4 x i8, stride=1> [0])>, StorageBuffer>
+ // CHECK-DAG: %[[IDX_CAST:.+]] = builtin.unrealized_conversion_cast %[[IDX]]
+ // CHECK: %[[ZERO:.*]] = spirv.Constant 0 : i32
+ // CHECK: %[[ADDR:.+]] = spirv.AccessChain %[[SRC_CAST]][%[[ZERO]], %[[IDX_CAST]]]
+ // CHECK: %[[VAL:.+]] = spirv.Load "StorageBuffer" %[[ADDR]] ["Aligned|Nontemporal", 32] : i8
+ // CHECK: %[[ZERO_I8:.+]] = spirv.Constant 0 : i8
+ // CHECK: %[[BOOL:.+]] = spirv.INotEqual %[[VAL]], %[[ZERO_I8]] : i8
+ %0 = memref.load %src[%i] { alignment = 32, nontemporal = true } : memref<4xi1, #spirv.storage_class<StorageBuffer>>
+ // CHECK: return %[[BOOL]]
+ return %0: i1
+}
+
+// CHECK-LABEL: func @load_aligned_psb
+// CHECK-SAME: (%[[SRC:.+]]: memref<4xi1, #spirv.storage_class<PhysicalStorageBuffer>>, %[[IDX:.+]]: index)
+func.func @load_aligned_psb(%src: memref<4xi1, #spirv.storage_class<PhysicalStorageBuffer>>, %i : index) -> i1 {
+ // CHECK-DAG: %[[SRC_CAST:.+]] = builtin.unrealized_conversion_cast %[[SRC]] : memref<4xi1, #spirv.storage_class<PhysicalStorageBuffer>> to !spirv.ptr<!spirv.struct<(!spirv.array<4 x i8, stride=1> [0])>, PhysicalStorageBuffer>
+ // CHECK-DAG: %[[IDX_CAST:.+]] = builtin.unrealized_conversion_cast %[[IDX]]
+ // CHECK: %[[ZERO:.*]] = spirv.Constant 0 : i32
+ // CHECK: %[[ADDR:.+]] = spirv.AccessChain %[[SRC_CAST]][%[[ZERO]], %[[IDX_CAST]]]
+ // CHECK: %[[VAL:.+]] = spirv.Load "PhysicalStorageBuffer" %[[ADDR]] ["Aligned", 32] : i8
+ // CHECK: %[[ZERO_I8:.+]] = spirv.Constant 0 : i8
+ // CHECK: %[[BOOL:.+]] = spirv.INotEqual %[[VAL]], %[[ZERO_I8]] : i8
+ %0 = memref.load %src[%i] { alignment = 32 } : memref<4xi1, #spirv.storage_class<PhysicalStorageBuffer>>
+ // CHECK: return %[[BOOL]]
+ return %0: i1
+}
+
// CHECK-LABEL: func @store_i1
// CHECK-SAME: %[[DST:.+]]: memref<4xi1, #spirv.storage_class<StorageBuffer>>,
// CHECK-SAME: %[[IDX:.+]]: index
@llvm/pr-subscribers-mlir-spirv

Author: Erick Ochoa Lopez (amd-eochoalo)

Changes

Propagating alignment attributes from memref operations into the SPIR-V dialect.

Full diff: https://github.com/llvm/llvm-project/pull/151723.diff

2 Files Affected:
- (modified) mlir/lib/Conversion/MemRefToSPIRV/MemRefToSPIRV.cpp
- (modified) mlir/test/Conversion/MemRefToSPIRV/memref-to-spirv.mlir
This patchset: