
Conversation

@melver melver commented Nov 19, 2025

The option -falloc-token-max=0 is supposed to be usable to override previous settings back to the target default maximum number of tokens (SIZE_MAX).

This did not work for the builtin:

| executed command: clang -cc1 [..] -nostdsysteminc -triple x86_64-linux-gnu -std=c++23 -fsyntax-only -verify clang/test/SemaCXX/alloc-token.cpp -falloc-token-max=0
| clang: llvm/lib/Support/AllocToken.cpp:38: std::optional<uint64_t> llvm::getAllocToken(AllocTokenMode, const AllocTokenMetadata &, uint64_t): Assertion `MaxTokens && "Must provide non-zero max tokens"' failed.

Fix it by also picking the default when "0" is passed.

Improve the documentation to be clearer about what the value "0" means.
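The core of the fix, visible in the diff below, is to treat an explicit "0" the same as "not set". A minimal standalone sketch of that selection logic, mirroring the expression the patch adds (the helper name is illustrative; the real change lives in InterpBuiltin.cpp, ExprConstant.cpp, and AllocToken.cpp):

```cpp
#include <cstdint>
#include <optional>

// Hypothetical helper mirroring the expression added in this patch: both
// "not set" (nullopt) and an explicit 0 fall back to the target default,
// i.e. all-ones in the width of the target's size_t (SIZE_MAX).
static uint64_t resolveMaxTokens(std::optional<uint64_t> AllocTokenMax,
                                 uint64_t SizeTypeBitWidth) {
  const uint64_t TargetDefault = ~0ULL >> (64 - SizeTypeBitWidth);
  return AllocTokenMax.value_or(0) ? *AllocTokenMax : TargetDefault;
}
```

With this, resolveMaxTokens(std::nullopt, 64) and resolveMaxTokens(0, 64) both yield SIZE_MAX, while resolveMaxTokens(2, 64) still caps the token space at 2.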

@llvmbot llvmbot added the clang:frontend, llvm:transforms, and clang:bytecode labels Nov 19, 2025

llvmbot commented Nov 19, 2025

@llvm/pr-subscribers-llvm-transforms

Author: Marco Elver (melver)

Changes

The option -falloc-token-max=0 is supposed to be usable to override previous settings back to the target default max tokens (SIZE_MAX).

This did not work for the builtin:

| executed command: clang -cc1 [..] -nostdsysteminc -triple x86_64-linux-gnu -std=c++23 -fsyntax-only -verify clang/test/SemaCXX/alloc-token.cpp -falloc-token-max=0
| clang: llvm/lib/Support/AllocToken.cpp:38: std::optional<uint64_t> llvm::getAllocToken(AllocTokenMode, const AllocTokenMetadata &, uint64_t): Assertion `MaxTokens && "Must provide non-zero max tokens"' failed.

Fix it by also picking the default if "0" is passed.

Improve the documentation to be clearer what the value of "0" means.


Full diff: https://github.com/llvm/llvm-project/pull/168689.diff

7 Files Affected:

  • (modified) clang/docs/AllocToken.rst (+3-3)
  • (modified) clang/include/clang/Basic/LangOptions.h (+2-2)
  • (modified) clang/include/clang/Options/Options.td (+1-1)
  • (modified) clang/lib/AST/ByteCode/InterpBuiltin.cpp (+2-1)
  • (modified) clang/lib/AST/ExprConstant.cpp (+2-1)
  • (modified) clang/test/SemaCXX/alloc-token.cpp (+1)
  • (modified) llvm/lib/Transforms/Instrumentation/AllocToken.cpp (+4-3)
```diff
diff --git a/clang/docs/AllocToken.rst b/clang/docs/AllocToken.rst
index 1a740e5e22c29..3f319e8be6421 100644
--- a/clang/docs/AllocToken.rst
+++ b/clang/docs/AllocToken.rst
@@ -52,8 +52,8 @@ change or removal. These may (experimentally) be selected with ``-Xclang
 The following command-line options affect generated token IDs:
 
 * ``-falloc-token-max=<N>``
-  Configures the maximum number of tokens. No max by default (tokens bounded
-  by ``SIZE_MAX``).
+  Configures the maximum number of token IDs. By default the number of tokens
+  is bounded by ``SIZE_MAX``.
 
 Querying Token IDs with ``__builtin_infer_alloc_token``
 =======================================================
@@ -129,7 +129,7 @@ Fast ABI
 --------
 
 An alternative ABI can be enabled with ``-fsanitize-alloc-token-fast-abi``,
-which encodes the token ID hint in the allocation function name.
+which encodes the token ID in the allocation function name.
 
 .. code-block:: c
diff --git a/clang/include/clang/Basic/LangOptions.h b/clang/include/clang/Basic/LangOptions.h
index 8aa89d8c8c807..3f042f8ddb5a1 100644
--- a/clang/include/clang/Basic/LangOptions.h
+++ b/clang/include/clang/Basic/LangOptions.h
@@ -566,8 +566,8 @@ class LangOptions : public LangOptionsBase {
   bool AtomicFineGrainedMemory = false;
   bool AtomicIgnoreDenormalMode = false;
 
-  /// Maximum number of allocation tokens (0 = no max), nullopt if none set (use
-  /// target default).
+  /// Maximum number of allocation tokens (0 = target SIZE_MAX), nullopt if none
+  /// set (use target SIZE_MAX).
   std::optional<uint64_t> AllocTokenMax;
 
   /// The allocation token mode.
diff --git a/clang/include/clang/Options/Options.td b/clang/include/clang/Options/Options.td
index cda11fdc94230..786acd6abbd21 100644
--- a/clang/include/clang/Options/Options.td
+++ b/clang/include/clang/Options/Options.td
@@ -2758,7 +2758,7 @@ defm sanitize_alloc_token_extended : BoolOption<"f", "sanitize-alloc-token-exten
 def falloc_token_max_EQ : Joined<["-"], "falloc-token-max=">,
   Group<f_Group>, Visibility<[ClangOption, CC1Option]>,
   MetaVarName<"<N>">,
-  HelpText<"Limit to maximum N allocation tokens (0 = no max)">;
+  HelpText<"Limit to maximum N allocation tokens (0 = target SIZE_MAX)">;
 def falloc_token_mode_EQ : Joined<["-"], "falloc-token-mode=">,
   Group<f_Group>, Visibility<[CC1Option]>,
diff --git a/clang/lib/AST/ByteCode/InterpBuiltin.cpp b/clang/lib/AST/ByteCode/InterpBuiltin.cpp
index 5a96320e12b6f..b6013834b6852 100644
--- a/clang/lib/AST/ByteCode/InterpBuiltin.cpp
+++ b/clang/lib/AST/ByteCode/InterpBuiltin.cpp
@@ -1317,8 +1317,9 @@ static bool interp__builtin_infer_alloc_token(InterpState &S, CodePtr OpPC,
   uint64_t BitWidth = ASTCtx.getTypeSize(ASTCtx.getSizeType());
   auto Mode =
       ASTCtx.getLangOpts().AllocTokenMode.value_or(llvm::DefaultAllocTokenMode);
+  auto MaxTokensOpt = ASTCtx.getLangOpts().AllocTokenMax;
   uint64_t MaxTokens =
-      ASTCtx.getLangOpts().AllocTokenMax.value_or(~0ULL >> (64 - BitWidth));
+      MaxTokensOpt.value_or(0) ? *MaxTokensOpt : (~0ULL >> (64 - BitWidth));
   // We do not read any of the arguments; discard them.
   for (int I = Call->getNumArgs() - 1; I >= 0; --I)
diff --git a/clang/lib/AST/ExprConstant.cpp b/clang/lib/AST/ExprConstant.cpp
index 74f6e3acb6b39..120c68d27de13 100644
--- a/clang/lib/AST/ExprConstant.cpp
+++ b/clang/lib/AST/ExprConstant.cpp
@@ -15559,8 +15559,9 @@ bool IntExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
     auto Mode =
         Info.getLangOpts().AllocTokenMode.value_or(llvm::DefaultAllocTokenMode);
     uint64_t BitWidth = Info.Ctx.getTypeSize(Info.Ctx.getSizeType());
+    auto MaxTokensOpt = Info.getLangOpts().AllocTokenMax;
     uint64_t MaxTokens =
-        Info.getLangOpts().AllocTokenMax.value_or(~0ULL >> (64 - BitWidth));
+        MaxTokensOpt.value_or(0) ? *MaxTokensOpt : (~0ULL >> (64 - BitWidth));
     auto MaybeToken = llvm::getAllocToken(Mode, *ATMD, MaxTokens);
     if (!MaybeToken)
       return Error(E, diag::note_constexpr_infer_alloc_token_stateful_mode);
diff --git a/clang/test/SemaCXX/alloc-token.cpp b/clang/test/SemaCXX/alloc-token.cpp
index be7acb7d42ef2..518ad7d94eb96 100644
--- a/clang/test/SemaCXX/alloc-token.cpp
+++ b/clang/test/SemaCXX/alloc-token.cpp
@@ -1,4 +1,5 @@
 // RUN: %clang_cc1 -triple x86_64-linux-gnu -std=c++23 -fsyntax-only -verify %s
+// RUN: %clang_cc1 -triple x86_64-linux-gnu -std=c++23 -fsyntax-only -verify %s -falloc-token-max=0
 // RUN: %clang_cc1 -triple x86_64-linux-gnu -std=c++23 -fsyntax-only -verify %s -fexperimental-new-constant-interpreter
 // RUN: %clang_cc1 -triple x86_64-linux-gnu -std=c++23 -fsyntax-only -verify %s -falloc-token-mode=typehash -DMODE_TYPEHASH
 // RUN: %clang_cc1 -triple x86_64-linux-gnu -std=c++23 -fsyntax-only -verify %s -falloc-token-max=2 -DTOKEN_MAX=2
diff --git a/llvm/lib/Transforms/Instrumentation/AllocToken.cpp b/llvm/lib/Transforms/Instrumentation/AllocToken.cpp
index 8181e4ef1d74f..cf74354cb438f 100644
--- a/llvm/lib/Transforms/Instrumentation/AllocToken.cpp
+++ b/llvm/lib/Transforms/Instrumentation/AllocToken.cpp
@@ -67,9 +67,10 @@
 cl::opt<std::string> ClFuncPrefix("alloc-token-prefix",
                                   cl::desc("The allocation function prefix"),
                                   cl::Hidden, cl::init("__alloc_token_"));
 
-cl::opt<uint64_t> ClMaxTokens("alloc-token-max",
-                              cl::desc("Maximum number of tokens (0 = no max)"),
-                              cl::Hidden, cl::init(0));
+cl::opt<uint64_t>
+    ClMaxTokens("alloc-token-max",
+                cl::desc("Maximum number of tokens (0 = target SIZE_MAX)"),
+                cl::Hidden, cl::init(0));
 
 cl::opt<bool> ClFastABI("alloc-token-fast-abi",
```
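A side note on the fallback expression used in the two constant-evaluator hunks above: `~0ULL >> (64 - BitWidth)` produces an all-ones value in the width of the target's `size_t`, i.e. the target SIZE_MAX the documentation refers to. A small self-contained check (my own illustration, not part of the patch):

```cpp
#include <cassert>
#include <cstdint>

int main() {
  // Same expression as in the patch, parameterized over the size_t bit width.
  auto defaultMaxTokens = [](uint64_t BitWidth) {
    return ~0ULL >> (64 - BitWidth);
  };
  assert(defaultMaxTokens(64) == UINT64_MAX); // SIZE_MAX on x86_64-linux-gnu
  assert(defaultMaxTokens(32) == UINT32_MAX); // SIZE_MAX on a 32-bit target
  return 0;
}
```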

@a-nogikh a-nogikh left a comment


LGTM

@melver melver merged commit 1500536 into llvm:main Nov 19, 2025
14 of 15 checks passed

llvm-ci commented Nov 19, 2025

LLVM Buildbot has detected a new failure on builder mlir-nvidia-gcc7 running on mlir-nvidia while building clang,llvm at step 7 "test-build-check-mlir-build-only-check-mlir".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/116/builds/21183

Here is the relevant piece of the build log for reference:
```
Step 7 (test-build-check-mlir-build-only-check-mlir) failure: test (failure)
********************
TEST 'MLIR :: Integration/GPU/CUDA/async.mlir' FAILED
********************
Exit Code: 1

Command Output (stdout):
--
# RUN: at line 1
/vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir | /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt -gpu-kernel-outlining | /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt -pass-pipeline='builtin.module(gpu.module(strip-debuginfo,convert-gpu-to-nvvm),nvvm-attach-target)' | /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt -gpu-async-region -gpu-to-llvm -reconcile-unrealized-casts -gpu-module-to-binary="format=fatbin" | /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt -async-to-async-runtime -async-runtime-ref-counting | /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt -convert-async-to-llvm -convert-func-to-llvm -convert-arith-to-llvm -convert-cf-to-llvm -reconcile-unrealized-casts | /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-runner --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/lib/libmlir_cuda_runtime.so --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/lib/libmlir_async_runtime.so --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/lib/libmlir_runner_utils.so --entry-point-result=void -O0 | /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/FileCheck /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt -gpu-kernel-outlining
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt '-pass-pipeline=builtin.module(gpu.module(strip-debuginfo,convert-gpu-to-nvvm),nvvm-attach-target)'
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt -gpu-async-region -gpu-to-llvm -reconcile-unrealized-casts -gpu-module-to-binary=format=fatbin
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt -async-to-async-runtime -async-runtime-ref-counting
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-opt -convert-async-to-llvm -convert-func-to-llvm -convert-arith-to-llvm -convert-cf-to-llvm -reconcile-unrealized-casts
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/mlir-runner --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/lib/libmlir_cuda_runtime.so --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/lib/libmlir_async_runtime.so --shared-libs=/vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/lib/libmlir_runner_utils.so --entry-point-result=void -O0
# .---command stderr------------
# | 'cuStreamWaitEvent(stream, event, 0)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventDestroy(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuStreamWaitEvent(stream, event, 0)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventDestroy(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuStreamWaitEvent(stream, event, 0)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuStreamWaitEvent(stream, event, 0)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventDestroy(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventDestroy(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventSynchronize(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# | 'cuEventDestroy(event)' failed with 'CUDA_ERROR_CONTEXT_IS_DESTROYED'
# `-----------------------------
# executed command: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.obj/bin/FileCheck /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir
# .---command stderr------------
# | /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir:68:12: error: CHECK: expected string not found in input
# | // CHECK: [84, 84]
# |           ^
# | <stdin>:1:1: note: scanning from here
# | Unranked Memref base@ = 0x5d1949841270 rank = 1 offset = 0 sizes = [2] strides = [1] data =
# | ^
# | <stdin>:2:1: note: possible intended match here
# | [42, 42]
# | ^
# |
# | Input file: <stdin>
# | Check file: /vol/worker/mlir-nvidia/mlir-nvidia-gcc7/llvm.src/mlir/test/Integration/GPU/CUDA/async.mlir
# |
# | -dump-input=help explains the following input dump.
# |
# | Input was:
# | <<<<<<
# |          1: Unranked Memref base@ = 0x5d1949841270 rank = 1 offset = 0 sizes = [2] strides = [1] data =
# | check:68'0  X~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error: no match found
# |          2: [42, 42]
# | check:68'0  ~~~~~~~~~
# | check:68'1  ?          possible intended match
...
```

llvm-ci commented Nov 19, 2025

LLVM Buildbot has detected a new failure on builder ppc64le-mlir-rhel-clang running on ppc64le-mlir-rhel-test while building clang,llvm at step 6 "test-build-check-mlir-build-only-check-mlir".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/129/builds/33485

Here is the relevant piece of the build log for reference:
```
Step 6 (test-build-check-mlir-build-only-check-mlir) failure: 1200 seconds without output running [b'ninja', b'check-mlir'], attempting to kill
...
PASS: MLIR :: Dialect/SCF/loop-unroll.mlir (3657 of 3676)
PASS: MLIR-Unit :: IR/./MLIRIRTests/101/130 (3658 of 3676)
PASS: MLIR :: Pass/ir-printing-file-tree.mlir (3659 of 3676)
PASS: MLIR :: mlir-runner/utils.mlir (3660 of 3676)
PASS: MLIR-Unit :: IR/./MLIRIRTests/100/130 (3661 of 3676)
PASS: MLIR :: mlir-runner/simple.mlir (3662 of 3676)
PASS: MLIR :: mlir-tblgen/cpp-class-comments.td (3663 of 3676)
PASS: MLIR :: Pass/pipeline-parsing.mlir (3664 of 3676)
PASS: MLIR :: mlir-tblgen/op-error.td (3665 of 3676)
PASS: MLIR :: Pass/pipeline-options-parsing.mlir (3666 of 3676)
command timed out: 1200 seconds without output running [b'ninja', b'check-mlir'], attempting to kill
process killed by signal 9
program finished with exit code -1
elapsedTime=1790.266292
```
aadeshps-mcw pushed a commit to aadeshps-mcw/llvm-project that referenced this pull request Nov 26, 2025
Priyanshu3820 pushed a commit to Priyanshu3820/llvm-project that referenced this pull request Nov 26, 2025
