[AMDGPU][clang][CodeGen][opt] Add late-resolved feature identifying predicates #134016

base: main

Conversation
@llvm/pr-subscribers-llvm-transforms @llvm/pr-subscribers-clang

Author: Alex Voicu (AlexVlx)

Changes

This change adds two semi-magical builtins for AMDGPU:

* __builtin_amdgcn_processor_is, which is similar in observable behaviour with __builtin_cpu_is, except that it is never "evaluated" at run time;
* __builtin_amdgcn_is_invocable, which is behaviourally similar with __has_builtin, except that it is not a macro (i.e. not evaluated at preprocessing time).

Neither of these are constexpr, even though when compiling for concrete (i.e. gfxXXX / gfxXXX-generic) targets they get evaluated in Clang, so they shouldn't tear the AST too badly / at all for multi-pass compilation cases like HIP. They can only be used in specific contexts (as args to control structures).

The motivation for adding these is two-fold:

I've tried to keep the overall footprint of the change small. The changes to Sema are a bit unpleasant, but there was a strong desire to have Clang validate these, and to constrain their uses, and this was the most compact solution I could come up with (suggestions welcome). In the end, I will note there is nothing that is actually AMDGPU specific here, so it is possible that in the future, assuming interests from other targets / users, we'd just promote them to generic intrinsics.

Patch is 59.55 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/134016.diff

17 Files Affected:
diff --git a/clang/docs/LanguageExtensions.rst b/clang/docs/LanguageExtensions.rst
index 3b8a9cac6587a..8a7cb75af13e5 100644
--- a/clang/docs/LanguageExtensions.rst
+++ b/clang/docs/LanguageExtensions.rst
@@ -4920,6 +4920,116 @@ If no address spaces names are provided, all address spaces are fenced.
__builtin_amdgcn_fence(__ATOMIC_SEQ_CST, "workgroup", "local")
__builtin_amdgcn_fence(__ATOMIC_SEQ_CST, "workgroup", "local", "global")
+__builtin_amdgcn_processor_is and __builtin_amdgcn_is_invocable
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+``__builtin_amdgcn_processor_is`` and ``__builtin_amdgcn_is_invocable`` provide
+a functional mechanism for programmatically querying:
+
+* the identity of the current target processor;
+* the capability of the current target processor to invoke a particular builtin.
+
+**Syntax**:
+
+.. code-block:: c
+
+ // When used as the predicate for a control structure
+ bool __builtin_amdgcn_processor_is(const char*);
+ bool __builtin_amdgcn_is_invocable(builtin_name);
+ // Otherwise
+ void __builtin_amdgcn_processor_is(const char*);
+ void __builtin_amdgcn_is_invocable(void);
+
+**Example of use**:
+
+.. code-block:: c++
+
+ if (__builtin_amdgcn_processor_is("gfx1201") ||
+ __builtin_amdgcn_is_invocable(__builtin_amdgcn_s_sleep_var))
+ __builtin_amdgcn_s_sleep_var(x);
+
+ if (!__builtin_amdgcn_processor_is("gfx906"))
+ __builtin_amdgcn_s_wait_event_export_ready();
+ else if (__builtin_amdgcn_processor_is("gfx1010") ||
+ __builtin_amdgcn_processor_is("gfx1101"))
+ __builtin_amdgcn_s_ttracedata_imm(1);
+
+ while (__builtin_amdgcn_processor_is("gfx1101")) *p += x;
+
+ do { *p -= x; } while (__builtin_amdgcn_processor_is("gfx1010"));
+
+ for (; __builtin_amdgcn_processor_is("gfx1201"); ++*p) break;
+
+ if (__builtin_amdgcn_is_invocable(__builtin_amdgcn_s_wait_event_export_ready))
+ __builtin_amdgcn_s_wait_event_export_ready();
+ else if (__builtin_amdgcn_is_invocable(__builtin_amdgcn_s_ttracedata_imm))
+ __builtin_amdgcn_s_ttracedata_imm(1);
+
+ do {
+ *p -= x;
+ } while (__builtin_amdgcn_is_invocable(__builtin_amdgcn_global_load_tr_b64_i32));
+
+ for (; __builtin_amdgcn_is_invocable(__builtin_amdgcn_permlane64); ++*p) break;
+
+**Description**:
+
+When used as the predicate value of the following control structures:
+
+.. code-block:: c++
+
+ if (...)
+ while (...)
+ do { } while (...)
+ for (...)
+
+be it directly, or as arguments to logical operators such as ``!, ||, &&``, the
+builtins return a boolean value that:
+
+* indicates whether the current target matches the argument; the argument MUST
+ be a string literal and a valid AMDGPU target
+* indicates whether the builtin function passed as the argument can be invoked
+ by the current target; the argument MUST be either a generic or AMDGPU
+ specific builtin name
+
+Outside of these contexts, the builtins have a ``void`` returning signature
+which prevents their misuse.
+
+**Example of invalid use**:
+
+.. code-block:: c++
+
+ void kernel(int* p, int x, bool (*pfn)(bool), const char* str) {
+ if (__builtin_amdgcn_processor_is("not_an_amdgcn_gfx_id")) return;
+ else if (__builtin_amdgcn_processor_is(str)) __builtin_trap();
+
+ bool a = __builtin_amdgcn_processor_is("gfx906");
+ const bool b = !__builtin_amdgcn_processor_is("gfx906");
+ const bool c = !__builtin_amdgcn_processor_is("gfx906");
+ bool d = __builtin_amdgcn_is_invocable(__builtin_amdgcn_s_sleep_var);
+ bool e = !__builtin_amdgcn_is_invocable(__builtin_amdgcn_s_sleep_var);
+ const auto f =
+ !__builtin_amdgcn_is_invocable(__builtin_amdgcn_s_wait_event_export_ready)
+ || __builtin_amdgcn_is_invocable(__builtin_amdgcn_s_sleep_var);
+ const auto g =
+ !__builtin_amdgcn_is_invocable(__builtin_amdgcn_s_wait_event_export_ready)
+ || !__builtin_amdgcn_is_invocable(__builtin_amdgcn_s_sleep_var);
+ __builtin_amdgcn_processor_is("gfx1201")
+ ? __builtin_amdgcn_s_sleep_var(x) : __builtin_amdgcn_s_sleep(42);
+ if (pfn(__builtin_amdgcn_processor_is("gfx1200")))
+ __builtin_amdgcn_s_sleep_var(x);
+
+ if (__builtin_amdgcn_is_invocable("__builtin_amdgcn_s_sleep_var")) return;
+ else if (__builtin_amdgcn_is_invocable(x)) __builtin_trap();
+ }
+
+When invoked while compiling for a concrete target, the builtins are evaluated
+early by Clang, and never produce any CodeGen effects / have no observable
+side-effects in IR. Conversely, when compiling for AMDGCN flavoured SPIR-V,
+which is an abstract target, a series of predicate values are implicitly
+created. These predicates get resolved when finalizing the compilation process
+for a concrete target, and shall reflect the latter's identity and features.
+Thus, it is possible to author high-level code, in e.g. HIP, that is target
+adaptive in a dynamic fashion, contrary to macro based mechanisms.
ARM/AArch64 Language Extensions
-------------------------------
diff --git a/clang/include/clang/Basic/BuiltinsAMDGPU.def b/clang/include/clang/Basic/BuiltinsAMDGPU.def
index 44ef404aee72f..5d01a7e75f7e7 100644
--- a/clang/include/clang/Basic/BuiltinsAMDGPU.def
+++ b/clang/include/clang/Basic/BuiltinsAMDGPU.def
@@ -346,6 +346,11 @@ BUILTIN(__builtin_amdgcn_endpgm, "v", "nr")
BUILTIN(__builtin_amdgcn_get_fpenv, "WUi", "n")
BUILTIN(__builtin_amdgcn_set_fpenv, "vWUi", "n")
+// These are special FE only builtins intended for forwarding the requirements
+// to the ME.
+BUILTIN(__builtin_amdgcn_processor_is, "vcC*", "nctu")
+BUILTIN(__builtin_amdgcn_is_invocable, "v", "nctu")
+
//===----------------------------------------------------------------------===//
// R600-NI only builtins.
//===----------------------------------------------------------------------===//
diff --git a/clang/include/clang/Basic/DiagnosticSemaKinds.td b/clang/include/clang/Basic/DiagnosticSemaKinds.td
index 5e45482584946..45f0f9eb88e55 100644
--- a/clang/include/clang/Basic/DiagnosticSemaKinds.td
+++ b/clang/include/clang/Basic/DiagnosticSemaKinds.td
@@ -13054,4 +13054,14 @@ def err_acc_decl_for_routine
// AMDGCN builtins diagnostics
def err_amdgcn_global_load_lds_size_invalid_value : Error<"invalid size value">;
def note_amdgcn_global_load_lds_size_valid_value : Note<"size must be %select{1, 2, or 4|1, 2, 4, 12 or 16}0">;
+def err_amdgcn_processor_is_arg_not_literal
+ : Error<"the argument to __builtin_amdgcn_processor_is must be a string "
+ "literal">;
+def err_amdgcn_processor_is_arg_invalid_value
+ : Error<"the argument to __builtin_amdgcn_processor_is must be a valid "
+ "AMDGCN processor identifier; '%0' is not valid">;
+def err_amdgcn_is_invocable_arg_invalid_value
+ : Error<"the argument to __builtin_amdgcn_is_invocable must be either a "
+ "target agnostic builtin or an AMDGCN target specific builtin; `%0`"
+ " is not valid">;
} // end of sema component.
diff --git a/clang/lib/Basic/Targets/SPIR.cpp b/clang/lib/Basic/Targets/SPIR.cpp
index 5b5f47f9647a2..eb43d9b0be283 100644
--- a/clang/lib/Basic/Targets/SPIR.cpp
+++ b/clang/lib/Basic/Targets/SPIR.cpp
@@ -152,3 +152,7 @@ void SPIRV64AMDGCNTargetInfo::setAuxTarget(const TargetInfo *Aux) {
Float128Format = DoubleFormat;
}
}
+
+bool SPIRV64AMDGCNTargetInfo::isValidCPUName(StringRef CPU) const {
+ return AMDGPUTI.isValidCPUName(CPU);
+}
diff --git a/clang/lib/Basic/Targets/SPIR.h b/clang/lib/Basic/Targets/SPIR.h
index 78505d66d6f2f..7aa13cbeb89fd 100644
--- a/clang/lib/Basic/Targets/SPIR.h
+++ b/clang/lib/Basic/Targets/SPIR.h
@@ -432,6 +432,10 @@ class LLVM_LIBRARY_VISIBILITY SPIRV64AMDGCNTargetInfo final
}
bool hasInt128Type() const override { return TargetInfo::hasInt128Type(); }
+
+ // This is only needed for validating arguments passed to
+ // __builtin_amdgcn_processor_is
+ bool isValidCPUName(StringRef Name) const override;
};
} // namespace targets
diff --git a/clang/lib/CodeGen/TargetBuiltins/AMDGPU.cpp b/clang/lib/CodeGen/TargetBuiltins/AMDGPU.cpp
index b56b739094ff3..7b1a3815144b4 100644
--- a/clang/lib/CodeGen/TargetBuiltins/AMDGPU.cpp
+++ b/clang/lib/CodeGen/TargetBuiltins/AMDGPU.cpp
@@ -284,6 +284,18 @@ void CodeGenFunction::AddAMDGPUFenceAddressSpaceMMRA(llvm::Instruction *Inst,
Inst->setMetadata(LLVMContext::MD_mmra, MMRAMetadata::getMD(Ctx, MMRAs));
}
+static Value *GetOrInsertAMDGPUPredicate(CodeGenFunction &CGF, Twine Name) {
+ auto PTy = IntegerType::getInt1Ty(CGF.getLLVMContext());
+
+ auto P = cast<GlobalVariable>(
+ CGF.CGM.getModule().getOrInsertGlobal(Name.str(), PTy));
+ P->setConstant(true);
+ P->setExternallyInitialized(true);
+
+ return CGF.Builder.CreateLoad(RawAddress(P, PTy, CharUnits::One(),
+ KnownNonNull));
+}
+
Value *CodeGenFunction::EmitAMDGPUBuiltinExpr(unsigned BuiltinID,
const CallExpr *E) {
llvm::AtomicOrdering AO = llvm::AtomicOrdering::SequentiallyConsistent;
@@ -585,6 +597,23 @@ Value *CodeGenFunction::EmitAMDGPUBuiltinExpr(unsigned BuiltinID,
llvm::Value *Env = EmitScalarExpr(E->getArg(0));
return Builder.CreateCall(F, {Env});
}
+ case AMDGPU::BI__builtin_amdgcn_processor_is: {
+ assert(CGM.getTriple().isSPIRV() &&
+ "__builtin_amdgcn_processor_is should never reach CodeGen for "
+ "concrete targets!");
+ StringRef Proc = cast<clang::StringLiteral>(E->getArg(0))->getString();
+ return GetOrInsertAMDGPUPredicate(*this, "llvm.amdgcn.is." + Proc);
+ }
+ case AMDGPU::BI__builtin_amdgcn_is_invocable: {
+ assert(CGM.getTriple().isSPIRV() &&
+ "__builtin_amdgcn_is_invocable should never reach CodeGen for "
+ "concrete targets!");
+ auto FD = cast<FunctionDecl>(
+ cast<DeclRefExpr>(E->getArg(0))->getReferencedDeclOfCallee());
+ StringRef RF =
+ getContext().BuiltinInfo.getRequiredFeatures(FD->getBuiltinID());
+ return GetOrInsertAMDGPUPredicate(*this, "llvm.amdgcn.has." + RF);
+ }
case AMDGPU::BI__builtin_amdgcn_read_exec:
return EmitAMDGCNBallotForExec(*this, E, Int64Ty, Int64Ty, false);
case AMDGPU::BI__builtin_amdgcn_read_exec_lo:
diff --git a/clang/lib/Sema/SemaExpr.cpp b/clang/lib/Sema/SemaExpr.cpp
index 7cc8374e69d73..24f5262ab3cf4 100644
--- a/clang/lib/Sema/SemaExpr.cpp
+++ b/clang/lib/Sema/SemaExpr.cpp
@@ -6541,6 +6541,22 @@ ExprResult Sema::BuildCallExpr(Scope *Scope, Expr *Fn, SourceLocation LParenLoc,
if (Result.isInvalid()) return ExprError();
Fn = Result.get();
+ // The __builtin_amdgcn_is_invocable builtin is special, and will be resolved
+ // later, when we check boolean conditions, for now we merely forward it
+ // without any additional checking.
+ if (Fn->getType() == Context.BuiltinFnTy && ArgExprs.size() == 1 &&
+ ArgExprs[0]->getType() == Context.BuiltinFnTy) {
+ auto FD = cast<FunctionDecl>(Fn->getReferencedDeclOfCallee());
+
+ if (FD->getName() == "__builtin_amdgcn_is_invocable") {
+ auto FnPtrTy = Context.getPointerType(FD->getType());
+ auto R = ImpCastExprToType(Fn, FnPtrTy, CK_BuiltinFnToFnPtr).get();
+ return CallExpr::Create(Context, R, ArgExprs, Context.VoidTy,
+ ExprValueKind::VK_PRValue, RParenLoc,
+ FPOptionsOverride());
+ }
+ }
+
if (CheckArgsForPlaceholders(ArgExprs))
return ExprError();
@@ -13234,6 +13250,20 @@ inline QualType Sema::CheckBitwiseOperands(ExprResult &LHS, ExprResult &RHS,
return InvalidOperands(Loc, LHS, RHS);
}
+static inline bool IsAMDGPUPredicateBI(Expr *E) {
+ if (!E->getType()->isVoidType())
+ return false;
+
+ if (auto CE = dyn_cast<CallExpr>(E)) {
+ if (auto BI = CE->getDirectCallee())
+ if (BI->getName() == "__builtin_amdgcn_processor_is" ||
+ BI->getName() == "__builtin_amdgcn_is_invocable")
+ return true;
+ }
+
+ return false;
+}
+
// C99 6.5.[13,14]
inline QualType Sema::CheckLogicalOperands(ExprResult &LHS, ExprResult &RHS,
SourceLocation Loc,
@@ -13329,6 +13359,9 @@ inline QualType Sema::CheckLogicalOperands(ExprResult &LHS, ExprResult &RHS,
// The following is safe because we only use this method for
// non-overloadable operands.
+ if (IsAMDGPUPredicateBI(LHS.get()) && IsAMDGPUPredicateBI(RHS.get()))
+ return Context.VoidTy;
+
// C++ [expr.log.and]p1
// C++ [expr.log.or]p1
// The operands are both contextually converted to type bool.
@@ -15576,6 +15609,38 @@ static bool isOverflowingIntegerType(ASTContext &Ctx, QualType T) {
return Ctx.getIntWidth(T) >= Ctx.getIntWidth(Ctx.IntTy);
}
+static Expr *ExpandAMDGPUPredicateBI(ASTContext &Ctx, CallExpr *CE) {
+ if (!CE->getBuiltinCallee())
+ return CXXBoolLiteralExpr::Create(Ctx, false, Ctx.BoolTy, CE->getExprLoc());
+
+ if (Ctx.getTargetInfo().getTriple().isSPIRV()) {
+ CE->setType(Ctx.getLogicalOperationType());
+ return CE;
+ }
+
+ bool P = false;
+ auto &TI = Ctx.getTargetInfo();
+
+ if (CE->getDirectCallee()->getName() == "__builtin_amdgcn_processor_is") {
+ auto GFX = dyn_cast<StringLiteral>(CE->getArg(0)->IgnoreParenCasts());
+ auto TID = TI.getTargetID();
+ if (GFX && TID) {
+ auto N = GFX->getString();
+ P = TI.isValidCPUName(GFX->getString()) && TID->find(N) == 0;
+ }
+ } else {
+ auto FD = cast<FunctionDecl>(CE->getArg(0)->getReferencedDeclOfCallee());
+
+ StringRef RF = Ctx.BuiltinInfo.getRequiredFeatures(FD->getBuiltinID());
+ llvm::StringMap<bool> CF;
+ Ctx.getFunctionFeatureMap(CF, FD);
+
+ P = Builtin::evaluateRequiredTargetFeatures(RF, CF);
+ }
+
+ return CXXBoolLiteralExpr::Create(Ctx, P, Ctx.BoolTy, CE->getExprLoc());
+}
+
ExprResult Sema::CreateBuiltinUnaryOp(SourceLocation OpLoc,
UnaryOperatorKind Opc, Expr *InputExpr,
bool IsAfterAmp) {
@@ -15753,6 +15818,8 @@ ExprResult Sema::CreateBuiltinUnaryOp(SourceLocation OpLoc,
// Vector logical not returns the signed variant of the operand type.
resultType = GetSignedVectorType(resultType);
break;
+ } else if (IsAMDGPUPredicateBI(InputExpr)) {
+ break;
} else {
return ExprError(Diag(OpLoc, diag::err_typecheck_unary_expr)
<< resultType << Input.get()->getSourceRange());
@@ -20469,6 +20536,88 @@ void Sema::DiagnoseEqualityWithExtraParens(ParenExpr *ParenE) {
}
}
+static bool ValidateAMDGPUPredicateBI(Sema &Sema, CallExpr *CE) {
+ if (CE->getDirectCallee()->getName() == "__builtin_amdgcn_processor_is") {
+ auto GFX = dyn_cast<StringLiteral>(CE->getArg(0)->IgnoreParenCasts());
+ if (!GFX) {
+ Sema.Diag(CE->getExprLoc(),
+ diag::err_amdgcn_processor_is_arg_not_literal);
+ return false;
+ }
+ auto N = GFX->getString();
+ if (!Sema.getASTContext().getTargetInfo().isValidCPUName(N) &&
+ (!Sema.getASTContext().getAuxTargetInfo() ||
+ !Sema.getASTContext().getAuxTargetInfo()->isValidCPUName(N))) {
+ Sema.Diag(CE->getExprLoc(),
+ diag::err_amdgcn_processor_is_arg_invalid_value) << N;
+ return false;
+ }
+ } else {
+ auto Arg = CE->getArg(0);
+ if (!Arg || Arg->getType() != Sema.getASTContext().BuiltinFnTy) {
+ Sema.Diag(CE->getExprLoc(),
+ diag::err_amdgcn_is_invocable_arg_invalid_value) << Arg;
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static Expr *MaybeHandleAMDGPUPredicateBI(Sema &Sema, Expr *E, bool &Invalid) {
+ if (auto UO = dyn_cast<UnaryOperator>(E)) {
+ auto SE = dyn_cast<CallExpr>(UO->getSubExpr());
+ if (IsAMDGPUPredicateBI(SE)) {
+ assert(
+ UO->getOpcode() == UnaryOperator::Opcode::UO_LNot &&
+ "__builtin_amdgcn_processor_is and __builtin_amdgcn_is_invocable "
+ "can only be used as operands of logical ops!");
+
+ if (!ValidateAMDGPUPredicateBI(Sema, SE)) {
+ Invalid = true;
+ return nullptr;
+ }
+
+ UO->setSubExpr(ExpandAMDGPUPredicateBI(Sema.getASTContext(), SE));
+ UO->setType(Sema.getASTContext().getLogicalOperationType());
+
+ return UO;
+ }
+ }
+ if (auto BO = dyn_cast<BinaryOperator>(E)) {
+ auto LHS = dyn_cast<CallExpr>(BO->getLHS());
+ auto RHS = dyn_cast<CallExpr>(BO->getRHS());
+ if (IsAMDGPUPredicateBI(LHS) && IsAMDGPUPredicateBI(RHS)) {
+ assert(
+ BO->isLogicalOp() &&
+ "__builtin_amdgcn_processor_is and __builtin_amdgcn_is_invocable "
+ "can only be used as operands of logical ops!");
+
+ if (!ValidateAMDGPUPredicateBI(Sema, LHS) ||
+ !ValidateAMDGPUPredicateBI(Sema, RHS)) {
+ Invalid = true;
+ return nullptr;
+ }
+
+ BO->setLHS(ExpandAMDGPUPredicateBI(Sema.getASTContext(), LHS));
+ BO->setRHS(ExpandAMDGPUPredicateBI(Sema.getASTContext(), RHS));
+ BO->setType(Sema.getASTContext().getLogicalOperationType());
+
+ return BO;
+ }
+ }
+ if (auto CE = dyn_cast<CallExpr>(E))
+ if (IsAMDGPUPredicateBI(CE)) {
+ if (!ValidateAMDGPUPredicateBI(Sema, CE)) {
+ Invalid = true;
+ return nullptr;
+ }
+ return ExpandAMDGPUPredicateBI(Sema.getASTContext(), CE);
+ }
+
+ return nullptr;
+}
+
ExprResult Sema::CheckBooleanCondition(SourceLocation Loc, Expr *E,
bool IsConstexpr) {
DiagnoseAssignmentAsCondition(E);
@@ -20480,6 +20629,14 @@ ExprResult Sema::CheckBooleanCondition(SourceLocation Loc, Expr *E,
E = result.get();
if (!E->isTypeDependent()) {
+ if (E->getType()->isVoidType()) {
+ bool IsInvalidPredicate = false;
+ if (auto BIC = MaybeHandleAMDGPUPredicateBI(*this, E, IsInvalidPredicate))
+ return BIC;
+ else if (IsInvalidPredicate)
+ return ExprError();
+ }
+
if (getLangOpts().CPlusPlus)
return CheckCXXBooleanCondition(E, IsConstexpr); // C++ 6.4p4
diff --git a/clang/test/CodeGen/amdgpu-builtin-cpu-is.c b/clang/test/CodeGen/amdgpu-builtin-cpu-is.c
new file mode 100644
index 0000000000000..6e261d9f5d239
--- /dev/null
+++ b/clang/test/CodeGen/amdgpu-builtin-cpu-is.c
@@ -0,0 +1,65 @@
+// NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py UTC_ARGS: --check-globals all --version 5
+// RUN: %clang_cc1 -triple amdgcn-amd-amdhsa -target-cpu gfx900 -emit-llvm %s -o - | FileCheck --check-prefix=AMDGCN-GFX900 %s
+// RUN: %clang_cc1 -triple amdgcn-amd-amdhsa -target-cpu gfx1010 -emit-llvm %s -o - | FileCheck --check-prefix=AMDGCN-GFX1010 %s
+// RUN: %clang_cc1 -triple spirv64-amd-amdhsa -emit-llvm %s -o - | FileCheck --check-prefix=AMDGCNSPIRV %s
+
+// Test that, depending on triple and, if applicable, target-cpu, one of three
+// things happens:
+// 1) for gfx900 we emit a call to trap (concrete target, matches)
+// 2) for gfx1010 we emit an empty kernel (concrete target, does not match)
+// 3) for AMDGCNSPIRV we emit llvm.amdgcn.is.gfx900 as a bool global, and
+// load from it to provide the condition a br (abstract target)
+//.
+// AMDGCN-GFX900: @__oclc_ABI_version = weak_odr hidden local_unnamed_addr addrspace(4) constant i32 600
+//.
+// AMDGCN-GFX1010: @__oclc_ABI_version = weak_odr hidden local_unnamed_addr addrspace(4) constant i32 600
+//.
+// AMDGCNSPIRV: @llvm.amdgcn.is.gfx900 = external addrspace(1) externally_initialized constant i1
+//.
+// AMDGCN-GFX900-LABEL: define dso_local void @foo(
+// AMDGCN-GFX900-SAME: ) #[[ATTR0:[0-9]+]] {
+// AMDGCN-GFX900-NEXT: [[ENTRY:.*:]]
+// AMDGCN-GFX900-NEXT: call void @llvm.trap()
+// AMDGCN-GFX900-NEXT: ret void
+//
+// AMDGCN-GFX1010-LABEL: define dso_local void @foo(
+// AMDGCN-GFX1010-SAME: ) #[[ATTR0:[0-9]+]] {
+// AMDGCN-GFX1010-NEXT: [[ENTRY:.*:]]
+// AMDGCN-GFX1010-NEXT: ret void
+//
+// AMDGCNSPIRV-LABEL: define spir_func void @foo(
+// AMDGCNSPIRV-SAME: ) addrspace(4) #[[ATTR0:[0-9]+]] {
+// AMDGCNSPIRV-NEXT: [[ENTRY:.*:]]
+// AMDGCNSPIRV-NEXT: [[TMP0:%.*]] = load i1, ptr addrspace(1) @llvm.amdgcn.is.gfx900, align 1
+// AMDGCNSPIRV-NEXT: br i1 [[TMP0]], label %[[IF_THEN:.*]], label %[[IF_END:.*]]
+// AMDGCNSPIRV: [[IF_THEN]]:
+// AMDGCNSPIRV-NEXT: call addrspace(4) void @llvm.trap()
+// AMDGCNSPIRV-NEXT: br label %[[IF_END]]
+// AMDGCNSPIRV: [[IF_END]]:
+// AMDGCNSPIRV-NEXT: ret void
+//
+void foo() {
+ if (__builtin_cpu_is("gfx90...
[truncated]
@llvm/pr-subscribers-clang-codegen

Author: Alex Voicu (AlexVlx)
✅ With the latest revision this PR passed the C/C++ code formatter.
Very cool, in general I'm a fan of being able to use LLVM-IR as a more general target. We already hack around these things in practice, so I think it's only beneficial to formalize it in a more correct way, even if LLVM-IR wasn't 'strictly' intended to be this kind of serialization format.
// AMDGCNSPIRV-NEXT:    ret void
//
void foo() {
  if (__builtin_amdgcn_is_invocable(__builtin_amdgcn_permlanex16))
Is this intended to handle builtins that require certain target features to be set?
Yes.
Could we get a test? Something simple like +dpp?
Could we get a test? Something simple like +dpp?
Sure, but if possible, could you clarify what you would like to be tested / what you expect to see, so that we avoid churning.
The issue with how the ROCm device libs do it is that certain builtins require target features to be used. It hacks around this with __attribute__((target)). I just want to know that you can call a builtin that requires +dpp features without that.
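For reference, a guarded use of a DPP-dependent builtin might look roughly like the sketch below. This is only an illustration: __builtin_amdgcn_update_dpp is picked here as a representative builtin gated on the +dpp feature, and the operand values are assumptions rather than something taken from this patch.

    // Hypothetical sketch: use a +dpp builtin without annotating the caller
    // with __attribute__((target("dpp"))), relying on the new predicate.
    void guarded_dpp_use(int *p, int x) {
      if (__builtin_amdgcn_is_invocable(__builtin_amdgcn_update_dpp))
        // dpp_ctrl = 0 (quad_perm(0,0,0,0)), row_mask/bank_mask = 0xf, bound_ctrl = false
        *p = __builtin_amdgcn_update_dpp(0, x, 0, 0xf, 0xf, false);
      else
        *p = x;
    }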
This is worth a release note item.
Indeed! I botched moving the changes from my internal scratchpad, and the rel notes got lost; fixing.
I went into it a bit above without having seen your question (race condition, I guess :) ), but to have it in one spot:

Now, this is specific to AMDGPU, I don't want to speculate too much about how other targets deal with this - which is another reason for which these are target builtins rather than going for something more generic.
Let me add my few cents here.
Let me try to answer this question without introducing a new builtin in clang (at first). In SPIR-V there is a specialization constant, which AFAIK doesn't have a direct LLVM IR counterpart.

At run time, when such a SPIR-V module is JIT compiled, OpSpecConstant materializes, so DCE (or, better said, some variation of DCE that is enforced to work with optnone) will be able to reason about the %cmp result, removing the dead branch, so we won't get an unsupported feature at codegen. The problem is: how to generate such SPIR-V from clang. So my understanding is that the new builtin should eventually be lowered (by the SPIR-V backend?) to a construct like the one in the pseudo-code, though that is not what is currently happening. And I believe that existing
This is one possible implementation indeed, for a workflow that goes from SPIR-V to ISA, or one that chooses to do the DCE in SPIR-V. Due to having to compose with an existing mature toolchain, rather than starting fresh, we have a slightly different flow where we reverse translate to LLVM IR and "resume" compilation from that point. Hence the implicitly inserted, never-to-be-emitted globals, which play the role the spec constants play in your example, when coupled with the dedicated predicate expansion pass. Something similar could be added to e.g.
High likelihood that I'll need something similar for my GPU libraries so I'd prefer something not explicitly tied to SPIR-V.
An intrinsic seems like the right IR model for CPU recognition, even for targets that don't specifically need to late-resolve it. That should be much easier for passes to optimize based on CPU settings than directly emitting the compiler-rt reference in the frontend. I know that generating IR with conservative target options and then bumping the target CPU in a pass is something various people have been interested in, so late optimization is specifically worth planning for here. We do have a theoretical problem with guaranteeing that non-matching code isn't emitted, because LLVM IR doesn't promise to leave a code sequence like this alone:
LLVM could theoretically complicate this by e.g. introducing a PHI or an
The solution we went with here (for our use case) is to just run the predicate expansion pass over pristine Clang-generated IR, before any other optimisation. I think that @nikic suggested an alternative based on
We didn't really say much on the call itself; we just spent a minute while we were going through controversial RFCs/PRs, to call this out as something that needed attention. If you think this topic would benefit from a meeting, we can organize one... but maybe a 1-on-1 chat would be better to start with, just to make sure we're on the same page.
If you have a construct like the following:
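(The snippet referred to here is not preserved in this capture; a minimal sketch of the kind of construct being described, using the same hypothetical gfx9000-only builtin that comes up later in the thread, would be:)

    void f(int x) {
      if (__builtin_amdgcn_processor_is("gfx9000"))
        __builtin_amdgcn_gfx9000_specific_intrinsic(x); // guarded: never runs on non-gfx9000
      __builtin_amdgcn_gfx9000_specific_intrinsic(x);   // unguarded: no such guarantee
    }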
We can tell, statically, that the first call is correctly guarded by an if statement: it's guaranteed it will never run on a non-gfx9000 processor. The second call, on the other hand, is not. So we can add a frontend rule: the first call is legal, the second is not. Obviously the error has false positives, in the sense that we can't actually prove the second call is incorrect at runtime... but that's fine, probably. What I don't want is that we end up with, essentially, the same constraint, but enforced by the backend.
Sure; we can't stop people from calling arbitrary pointers.
There are ways to solve this: for example, we can make the llvm.compiler.supports produce a token, and staple that token onto the intrinsics using a bundle. Making this work requires that IRGen knows which intrinsic calls are actually impacted... I care less about exactly how we solve this because we can adjust the solution later. Whatever we expose in the frontend is much harder to change later.
Definitely, more than happy to have a 1-on-1 (2-on-1 even, since I think @AaronBallman also suggested something along these lines as well :) ).
I will note that on concrete targets, what is being proposed already works as described, by virtue of it being an error to call a builtin that is not available. Having said that, this gives me some trepidation and I think it can end up being user adverse. Consider the following case:

    void foo() {
      if (__builtin_amdgcn_is_invocable(__builtin_amdgcn_gfx9000_specific_intrinsic))
        __builtin_amdgcn_gfx9000_specific_intrinsic();
    }

    void bar() {
      if (__builtin_amdgcn_processor_is("gfx9000"))
        foo();
      foo();
    }

We've just made the call to foo() illegal on anything that is not gfx9000, but that builtin / intrinsic could exist in 8999 other gfx versions. These don't always form binary, mutually exclusive structures. So I think I disagree with the "that's fine, probably".
Could you please detail why? Ultimately the BE still gets to decide on the legality of things that tweak it pretty intrinsically, even if said things come from otherwise linguistically correct constructs which have passed FE analysis. Also, we'd never really reach the BE; we're just sliding in immediately after Clang, before optimisation, so there's still enough info to provide a useful error message. Furthermore, this might be a better point to check anyway, as linking in bitcode could / should have already occurred, so what would otherwise have been external symbols that impact viability would now be satisfied.
Between making the wrong choice and going with something that's user adverse early on, then trying to build increasingly complicated mechanisms to make it work, I would prefer we just left these as target specific, low level builtins returning
Please email me with some times that will work for you.
I... don't think I'm suggesting this? The fact that there's a call to foo() from a __builtin_amdgcn_processor_is block shouldn't imply anything about other calls to foo(). What I'm basically suggesting is just exposing SPIR-V specialization constants as a C construct. Your example SPIR-V was something like:
We want to come up with a corresponding C construct that's guaranteed to compile to valid SPIR-V. My suggestion is something like:
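(The suggested snippet is likewise not preserved in this capture; presumably it was something along the lines of the sketch below, with "hw_id_that_supports_feature" standing in for a concrete processor name:)

    if (__builtin_amdgcn_processor_is("hw_id_that_supports_feature")) {
      // only intrinsics that are legal on hw_id_that_supports_feature are used here
    }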
In the body of the if statement, you can use whatever intrinsics are legal on hw_id_that_supports_feature.
Isn't doing checks immediately after IR generation basically the same as checking the AST, just on a slightly different representation?
Perhaps I am misunderstanding, in which case I apologise. I started from: "We can tell, statically, that the first call is correctly guarded by an if statement: it's guaranteed it will never run on a non-gfx9000 processor. The second call, on the other hand, is not. So we can add a frontend rule: the first call is legal, the second is not." I'm saying we cannot really infer anything about the legality of a naked call to a builtin either, at this point. Because the builtin might be available on many processors / processors other than gfx9000. We can develop the argument to say "well, fine, what we actually meant here is
I'm confused as to what is different versus what this PR does, which does generate valid SPIR-V / LLVM IR. Perhaps there is an underlying assumption that there is some construct that makes the otherwise dead block still contain valid code, and there really isn't. There's an example I provided above where what is guarded is (static) finite resource allocation, not just the use of an intrinsic; we'd not know in the FE which is correct, and we cannot allocate both until we know the target at JIT / finalisation time (so before executing the code), and we cannot generate executable code with both allocation requests live, as the finite resource gets exhausted. So the only place where we can meaningfully deal with this is in the ME / over IR, before hitting the BE. We should be careful to avoid focusing on the
Not in this case. There are at least two aspects that make a difference:
None of the above matters for concrete targets, where we just resolve everything in the AST already, because we have full information in the FE.
I think Eli is suggesting something like the rule for @available:
So it's not that adding a check for the builtin will suddenly cause un-checked calls to it to fail, it's that you have to have such a check to use it in the first place, which makes sense because it's not always available on the target. Note that the
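To make the analogy concrete, under such a rule the guarded use would be accepted while the bare use would be diagnosed; a sketch (the choice of builtin is illustrative, borrowed from the documentation in this patch, and the exact diagnostic is assumed):

    void f(int x) {
      if (__builtin_amdgcn_is_invocable(__builtin_amdgcn_s_sleep_var))
        __builtin_amdgcn_s_sleep_var(x); // OK: guarded, analogous to @available
      __builtin_amdgcn_s_sleep_var(x);   // would be diagnosed under such a rule
    }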
This is interesting, and I had / have looked at
If the added warning is considered I can look at adding that, but I think that should be a separate patch / conversation since it'd mess with established builtin behaviour (as mentioned, one can reach for an unguarded builtin today without any additional diagnostics / invitation to touch
Unless I am missing something obvious, this brings us dangerously close to re-inventing language subsetting / restrictions, that are already present in single-source(-ish) offload languages. It ends up being
So your users today are building for generic AMDGPU but using builtins that are only available on a specific processor release? Presumably those builtin calls are protected somehow, since otherwise their programs would be crashing in the backend or, worse, at runtime. Are they managing that in some way that the language could theoretically leverage at all, or in practice is it too varied and ad hoc?
This is basically how the NVIDIA device library and ROCm device library already work. In the latter case we just accept that
Right, but the code still contains some kind of check to decide which version of the library function to call. Maybe it's implicit by multiversioned functions or something, but there's something in the source code, because if the user just writes a kernel that does nothing but call a builtin that's not always available, you've got a problem. My question is just what those checks look like and whether that's something we can reasonably leverage in the language design here to help programmers not make that kind of mistake. And if these checks are all done in the library, the library can of course just be annotated.
They do crash at run time, except not in the way one would expect - they fail when finalising / JIT-ing from SPIR-V, which is still a compiler failure, except it's a BE / ISEL one. But yes, this is a current problem (which this is addressing). Here's an example (there are others):
What we would like to do is to allow people to handle these cases with a linear translation from the above into
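Presumably the target of that translation is the new builtins, i.e. a dispatch that today has to be keyed off device-lib internals would instead be written roughly as follows (a sketch based on the documentation in this patch, not on the actual device-lib sources):

    void do_sleep(int x) {
      if (__builtin_amdgcn_processor_is("gfx1201") ||
          __builtin_amdgcn_is_invocable(__builtin_amdgcn_s_sleep_var))
        __builtin_amdgcn_s_sleep_var(x); // variable-duration sleep, where available
      else
        __builtin_amdgcn_s_sleep(2);     // fixed-duration fallback
    }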
This is precisely why we want the frontend diagnostic: if we diagnose the bad cases in the frontend, later stages are guaranteed to lower correctly. If we diagnose later, you don't know you messed up until you get crash reports from your users.
I prefer to think of it as a generic target which has none of the features. Yes, you might have to take some time to annotate your code, but once you have the annotations, it catches a lot of potential mistakes. In case nobody else has brought it up, we currently do the following on Arm, which is conceptually similar:
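(The snippet itself isn't preserved in this capture; the Arm mechanism being referred to is, roughly, that feature-specific intrinsics are only accepted inside functions carrying a matching target attribute, along the lines of the sketch below, where the feature name is merely illustrative:)

    __attribute__((target("dotprod")))
    void uses_dotprod(void) {
      // intrinsics gated on +dotprod may be used here; calling them from a
      // function without a matching target attribute is rejected at compile time
    }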
The ROCm Device Libs use the
I guess I am lost as to what one can diagnose. The upper bound would be the warning
This is an interesting view but does not address our uses / interests, which is why we defined it as described, as union rather than intersection.
But not all, and it does so at the cost of yet another annotation (we have quite a few already). Furthermore, I feel that what is getting lost in translation is that the scope of these is not just functions. You can opt to guard ASM blocks, which might have target specific constraints, or resource allocation (as I mentioned). Extracting all of those into separate functions, that are now annotated, pretty much guarantees limited interest - very few users are going to start rewriting their working code to extract builtin / asm / resource allocation into separate named functions with attributes. Even if they did, you still do not know the concrete target, and therefore cannot unambiguously reason about the legality of this or that call (which is why even
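To make the point about guarding more than just builtin calls concrete, a small sketch (the asm payload is deliberately trivial and purely illustrative):

    void f(void) {
      if (__builtin_amdgcn_processor_is("gfx90a")) {
        // an asm block whose constraints / encodings are only valid on the
        // guarded processor would go here; s_nop is just a stand-in
        __asm__ volatile("s_nop 0");
      }
    }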
@efriedma-quic was kind enough to have a call where we discussed this a bit more. I'll update tomorrow with a potential way forward, for the group's consideration.
Following up, here's a possible approach to making progress, broken down in phases (@efriedma-quic can correct me if I am misrepresenting any of these):
The frontend tradeoffs here are complicated and already under discussion so I'm going to skip over that aspect. This looks like a layer of stuff which can be built on top of an llvm intrinsic that guarantees branch folding before instruction selection. The problems with the rocm device libs having invalid code on branches that the compiler is meant to strip but doesn't at O0 would be solvable by leaving calls to that raw intrinsic in place. We can probably do that as an intrinsic returning bool passed to the branch, where a target hook is called on it to resolve to true/false/report-error, as part of a simple simplify-cfg style pass. Essentially just force the evaluation of the intrinsic and then promise to delete dead branches. I'd like that intrinsic anyway for language runtime hackery. Is there an RFC associated with this that would be a better place to put that thought? If I implement it, are we game for rebasing this on said general purpose lowering intrinsic?
This change adds two semi-magical builtins for AMDGPU:

* __builtin_amdgcn_processor_is, which is similar in observable behaviour with __builtin_cpu_is, except that it is never "evaluated" at run time;
* __builtin_amdgcn_is_invocable, which is behaviourally similar with __has_builtin, except that it is not a macro (i.e. not evaluated at preprocessing time).

Neither of these are constexpr, even though when compiling for concrete (i.e. gfxXXX / gfxXXX-generic) targets they get evaluated in Clang, so they shouldn't tear the AST too badly / at all for multi-pass compilation cases like HIP. They can only be used in specific contexts (as args to control structures).

The motivation for adding these is two-fold:
I've tried to keep the overall footprint of the change small. The changes to Sema are a bit unpleasant, but there was a strong desire to have Clang validate these, and to constrain their uses, and this was the most compact solution I could come up with (suggestions welcome).
In the end, I will note there is nothing that is actually AMDGPU specific here, so it is possible that in the future, assuming interests from other targets / users, we'd just promote them to generic intrinsics.
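A minimal example of the intended usage pattern, adapted from the documentation added by the patch:

    void f(void) {
      if (!__builtin_amdgcn_processor_is("gfx906"))
        __builtin_amdgcn_s_wait_event_export_ready();
      else if (__builtin_amdgcn_is_invocable(__builtin_amdgcn_s_ttracedata_imm))
        __builtin_amdgcn_s_ttracedata_imm(1);
    }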