
Conversation

@mtrofin
Member

@mtrofin mtrofin commented Aug 29, 2025

The inliner's FeatureMap used to be immutable, but in IR2Vec cases we don't know the shapes of the embedding vectors until later, so we need to initialize it at the time we construct the advisor. In non-distributed ThinLTO cases, for example, this means we'd mutate shared state.

The feature set is also needed when constructing the underlying model runner.

The alternative here is to postpone the creation of the model runner to the time we construct the advisor, and also make the feature map a member of the advisor object.

(issue identified by @efriedma-quic in PR #154541)

@mtrofin mtrofin requested a review from svkeerthy August 29, 2025 22:47
@llvmbot llvmbot added mlgo llvm:analysis Includes value tracking, cost tables and constant folding labels Aug 29, 2025
@mtrofin mtrofin changed the title [mlgo][inliner] Fix potential concurrency issue in local ThinLTO + IR… [mlgo][inliner] Fix potential concurrency issue in local ThinLTO + IR2Vec cases Aug 29, 2025
@llvmbot
Member

llvmbot commented Aug 29, 2025

@llvm/pr-subscribers-mlgo

Author: Mircea Trofin (mtrofin)

Changes

The inliner's FeatureMap used to be immutable, but in IR2Vec cases we don't know the shapes of the embedding vectors until later, so we need to initialize it at the time we construct the advisor. In non-distributed ThinLTO cases, for example, this means we'd mutate shared state.

The feature set is also needed when constructing the underlying model runner.

The alternative here is to postpone the creation of the model runner to the time we construct the advisor, and also make the feature map a member of the advisor object.


Full diff: https://github.com/llvm/llvm-project/pull/156120.diff

4 Files Affected:

  • (modified) llvm/include/llvm/Analysis/InlineModelFeatureMaps.h (-2)
  • (modified) llvm/include/llvm/Analysis/MLInlineAdvisor.h (+6-1)
  • (modified) llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp (+43-29)
  • (modified) llvm/lib/Analysis/MLInlineAdvisor.cpp (+36-26)
diff --git a/llvm/include/llvm/Analysis/InlineModelFeatureMaps.h b/llvm/include/llvm/Analysis/InlineModelFeatureMaps.h
index 5c6aee3ab38ab..e559171b9c257 100644
--- a/llvm/include/llvm/Analysis/InlineModelFeatureMaps.h
+++ b/llvm/include/llvm/Analysis/InlineModelFeatureMaps.h
@@ -160,8 +160,6 @@ inlineCostFeatureToMlFeature(InlineCostFeatureIndex Feature) {
   return static_cast<FeatureIndex>(static_cast<size_t>(Feature));
 }
 
-LLVM_ABI extern std::vector<TensorSpec> &getFeatureMap();
-
 LLVM_ABI extern const char *const DecisionName;
 LLVM_ABI extern const TensorSpec InlineDecisionSpec;
 LLVM_ABI extern const char *const DefaultDecisionName;
diff --git a/llvm/include/llvm/Analysis/MLInlineAdvisor.h b/llvm/include/llvm/Analysis/MLInlineAdvisor.h
index 8262dd0846ede..cc4c482b379e3 100644
--- a/llvm/include/llvm/Analysis/MLInlineAdvisor.h
+++ b/llvm/include/llvm/Analysis/MLInlineAdvisor.h
@@ -28,7 +28,9 @@ class ProfileSummaryInfo;
 class MLInlineAdvisor : public InlineAdvisor {
 public:
   MLInlineAdvisor(Module &M, ModuleAnalysisManager &MAM,
-                  std::unique_ptr<MLModelRunner> ModelRunner,
+                  std::function<std::unique_ptr<MLModelRunner>(
+                      const std::vector<TensorSpec> &)>
+                      GetModelRunner,
                   std::function<bool(CallBase &)> GetDefaultAdvice);
 
   virtual ~MLInlineAdvisor() = default;
@@ -46,6 +48,8 @@ class MLInlineAdvisor : public InlineAdvisor {
   int64_t getLocalCalls(Function &F);
   const MLModelRunner &getModelRunner() const { return *ModelRunner; }
   FunctionPropertiesInfo &getCachedFPI(Function &) const;
+  const std::vector<TensorSpec> &getFeatureMap() const { return FeatureMap; };
+  static const std::vector<TensorSpec> &getInitialFeatureMap();
 
 protected:
   std::unique_ptr<InlineAdvice> getAdviceImpl(CallBase &CB) override;
@@ -65,6 +69,7 @@ class MLInlineAdvisor : public InlineAdvisor {
 
   std::unique_ptr<MLModelRunner> ModelRunner;
   std::function<bool(CallBase &)> GetDefaultAdvice;
+  std::vector<TensorSpec> FeatureMap;
 
 private:
   int64_t getModuleIRSize() const;
diff --git a/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp b/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
index 790e00e1b3b06..99cd7364a4618 100644
--- a/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
+++ b/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
@@ -97,7 +97,8 @@ struct InlineEvent {
 /// Collect data we may use for training a model.
 class TrainingLogger final {
 public:
-  TrainingLogger(StringRef LogFileName, const ModelUnderTrainingRunner *MUTR);
+  TrainingLogger(StringRef LogFileName, const ModelUnderTrainingRunner *MUTR,
+                 const std::vector<TensorSpec> &FeatureMap);
 
   /// Log one inlining event.
   void logInlineEvent(const InlineEvent &Event,
@@ -106,6 +107,8 @@ class TrainingLogger final {
 private:
   StringRef LogFileName;
   const ModelUnderTrainingRunner *const MUTR;
+  const std::vector<TensorSpec> &FeatureMap;
+
   std::unique_ptr<Logger> L;
   BitVector Effects;
   /// Set these 2 clearly OOB, to make sure we set them later.
@@ -142,9 +145,10 @@ class DevelopmentModeMLInlineAdvisor : public MLInlineAdvisor {
 public:
   DevelopmentModeMLInlineAdvisor(
       Module &M, ModuleAnalysisManager &MAM,
-      std::unique_ptr<MLModelRunner> ModelRunner,
-      std::function<bool(CallBase &)> GetDefaultAdvice,
-      std::unique_ptr<TrainingLogger> Logger);
+      std::function<
+          std::unique_ptr<MLModelRunner>(const std::vector<TensorSpec> &)>
+          GetModelRunner,
+      std::function<bool(CallBase &)> GetDefaultAdvice);
 
   size_t getTotalSizeEstimate();
 
@@ -258,9 +262,13 @@ static const std::vector<TensorSpec> TrainingOnlyFeatures{
     TensorSpec::createSpec<float>(TFFeedPrefix + "reward", {1}),
     TensorSpec::createSpec<int32_t>(TFFeedPrefix + "step_type", {1})};
 
-static const std::vector<TensorSpec> getInputFeatures() {
+// add TFFeedPrefix to the names and also add the "TrainingOnlyFeatures" which
+// the model runner needs to see present. We don't set them ourselves or
+// interact with them.
+static const std::vector<TensorSpec>
+convertInputFeatures(const std::vector<TensorSpec> &OriginalFeatures) {
   std::vector<TensorSpec> InputSpecs;
-  for (const auto &Feature : FeatureMap)
+  for (const auto &Feature : OriginalFeatures)
     InputSpecs.push_back(TensorSpec(TFFeedPrefix + Feature.name(), Feature));
   append_range(InputSpecs, TrainingOnlyFeatures);
   return InputSpecs;
@@ -269,8 +277,9 @@ static const std::vector<TensorSpec> getInputFeatures() {
 } // namespace
 
 TrainingLogger::TrainingLogger(StringRef LogFileName,
-                               const ModelUnderTrainingRunner *MUTR)
-    : LogFileName(LogFileName), MUTR(MUTR) {
+                               const ModelUnderTrainingRunner *MUTR,
+                               const std::vector<TensorSpec> &FeatureMap)
+    : LogFileName(LogFileName), MUTR(MUTR), FeatureMap(FeatureMap) {
   // The first output is the inlining decision.
   std::vector<TensorSpec> FT(FeatureMap.begin(), FeatureMap.end());
 
@@ -327,15 +336,19 @@ void TrainingLogger::logInlineEvent(const InlineEvent &Event,
 
 DevelopmentModeMLInlineAdvisor::DevelopmentModeMLInlineAdvisor(
     Module &M, ModuleAnalysisManager &MAM,
-    std::unique_ptr<MLModelRunner> ModelRunner,
-    std::function<bool(CallBase &)> GetDefaultAdvice,
-    std::unique_ptr<TrainingLogger> Logger)
-    : MLInlineAdvisor(M, MAM, std::move(ModelRunner), GetDefaultAdvice),
+    std::function<
+        std::unique_ptr<MLModelRunner>(const std::vector<TensorSpec> &)>
+        GetModelRunner,
+    std::function<bool(CallBase &)> GetDefaultAdvice)
+    : MLInlineAdvisor(M, MAM, GetModelRunner, GetDefaultAdvice),
       IsDoingInference(isa<ModelUnderTrainingRunner>(getModelRunner())),
-      Logger(std::move(Logger)),
       InitialNativeSize(isLogging() ? getTotalSizeEstimate() : 0),
       CurrentNativeSize(InitialNativeSize) {
   // We cannot have the case of neither inference nor logging.
+  if (!TrainingLog.empty())
+    Logger = std::make_unique<TrainingLogger>(
+        TrainingLog, dyn_cast<ModelUnderTrainingRunner>(ModelRunner.get()),
+        getFeatureMap());
   assert(IsDoingInference || isLogging());
 }
 
@@ -401,21 +414,22 @@ std::unique_ptr<InlineAdvisor> llvm::getDevelopmentModeAdvisor(
     Module &M, ModuleAnalysisManager &MAM,
     std::function<bool(CallBase &)> GetDefaultAdvice) {
   auto &Ctx = M.getContext();
-  std::unique_ptr<MLModelRunner> Runner;
-  if (TFModelUnderTrainingPath.empty())
-    Runner.reset(new NoInferenceModelRunner(Ctx, getInputFeatures()));
-  else
-    Runner = ModelUnderTrainingRunner::createAndEnsureValid(
-        Ctx, TFModelUnderTrainingPath, DecisionName, getInputFeatures(),
-        TFOutputSpecOverride);
-  if (!Runner)
-    return nullptr;
-  std::unique_ptr<TrainingLogger> Logger;
-  if (!TrainingLog.empty())
-    Logger = std::make_unique<TrainingLogger>(
-        TrainingLog, dyn_cast<ModelUnderTrainingRunner>(Runner.get()));
-
-  return std::make_unique<DevelopmentModeMLInlineAdvisor>(
-      M, MAM, std::move(Runner), GetDefaultAdvice, std::move(Logger));
+  auto RunnerFactory = [&](const std::vector<TensorSpec> &InputFeatures)
+      -> std::unique_ptr<MLModelRunner> {
+    std::unique_ptr<MLModelRunner> Runner;
+    const std::vector<TensorSpec> ConvertedFeatures =
+        convertInputFeatures(InputFeatures);
+    if (TFModelUnderTrainingPath.empty())
+      Runner.reset(new NoInferenceModelRunner(Ctx, ConvertedFeatures));
+    else
+      Runner = ModelUnderTrainingRunner::createAndEnsureValid(
+          Ctx, TFModelUnderTrainingPath, DecisionName, ConvertedFeatures,
+          TFOutputSpecOverride);
+    if (!Runner)
+      return nullptr;
+    return Runner;
+  };
+  return std::make_unique<DevelopmentModeMLInlineAdvisor>(M, MAM, RunnerFactory,
+                                                          GetDefaultAdvice);
 }
 #endif // defined(LLVM_HAVE_TFLITE)
diff --git a/llvm/lib/Analysis/MLInlineAdvisor.cpp b/llvm/lib/Analysis/MLInlineAdvisor.cpp
index 7854c19088ad3..f90717d3085eb 100644
--- a/llvm/lib/Analysis/MLInlineAdvisor.cpp
+++ b/llvm/lib/Analysis/MLInlineAdvisor.cpp
@@ -75,21 +75,22 @@ llvm::getReleaseModeAdvisor(Module &M, ModuleAnalysisManager &MAM,
   if (!llvm::isEmbeddedModelEvaluatorValid<CompiledModelType>() &&
       InteractiveChannelBaseName.empty())
     return nullptr;
-  std::unique_ptr<MLModelRunner> AOTRunner;
-  if (InteractiveChannelBaseName.empty())
-    AOTRunner = std::make_unique<ReleaseModeModelRunner<CompiledModelType>>(
-        M.getContext(), getFeatureMap(), DecisionName,
-        EmbeddedModelRunnerOptions().setModelSelector(ModelSelector));
-  else {
-    auto Features = getFeatureMap();
-    if (InteractiveIncludeDefault)
-      Features.push_back(DefaultDecisionSpec);
-    AOTRunner = std::make_unique<InteractiveModelRunner>(
-        M.getContext(), Features, InlineDecisionSpec,
-        InteractiveChannelBaseName + ".out",
-        InteractiveChannelBaseName + ".in");
-  }
-  return std::make_unique<MLInlineAdvisor>(M, MAM, std::move(AOTRunner),
+  auto RunnerFactory = [&](const std::vector<TensorSpec> &InputFeatures)
+      -> std::unique_ptr<MLModelRunner> {
+    std::unique_ptr<MLModelRunner> AOTRunner;
+    if (InteractiveChannelBaseName.empty())
+      AOTRunner = std::make_unique<ReleaseModeModelRunner<CompiledModelType>>(
+          M.getContext(), InputFeatures, DecisionName,
+          EmbeddedModelRunnerOptions().setModelSelector(ModelSelector));
+    else {
+      AOTRunner = std::make_unique<InteractiveModelRunner>(
+          M.getContext(), InputFeatures, InlineDecisionSpec,
+          InteractiveChannelBaseName + ".out",
+          InteractiveChannelBaseName + ".in");
+    }
+    return AOTRunner;
+  };
+  return std::make_unique<MLInlineAdvisor>(M, MAM, RunnerFactory,
                                            GetDefaultAdvice);
 }
 
@@ -107,7 +108,7 @@ static cl::opt<bool> KeepFPICache(
         "For test - keep the ML Inline advisor's FunctionPropertiesInfo cache"),
     cl::init(false));
 
-std::vector<TensorSpec> &llvm::getFeatureMap() {
+const std::vector<TensorSpec> &MLInlineAdvisor::getInitialFeatureMap() {
 // clang-format off
 static std::vector<TensorSpec> FeatureMap{
 #define POPULATE_NAMES(DTYPE, SHAPE, NAME, __) TensorSpec::createSpec<DTYPE>(#NAME, SHAPE),
@@ -142,17 +143,17 @@ CallBase *getInlinableCS(Instruction &I) {
 
 MLInlineAdvisor::MLInlineAdvisor(
     Module &M, ModuleAnalysisManager &MAM,
-    std::unique_ptr<MLModelRunner> Runner,
+    std::function<
+        std::unique_ptr<MLModelRunner>(const std::vector<TensorSpec> &)>
+        GetModelRunner,
     std::function<bool(CallBase &)> GetDefaultAdvice)
     : InlineAdvisor(
           M, MAM.getResult<FunctionAnalysisManagerModuleProxy>(M).getManager()),
-      ModelRunner(std::move(Runner)), GetDefaultAdvice(GetDefaultAdvice),
+      GetDefaultAdvice(GetDefaultAdvice), FeatureMap(getInitialFeatureMap()),
      CG(MAM.getResult<LazyCallGraphAnalysis>(M)),
      UseIR2Vec(MAM.getCachedResult<IR2VecVocabAnalysis>(M) != nullptr),
      InitialIRSize(getModuleIRSize()), CurrentIRSize(InitialIRSize),
      PSI(MAM.getResult<ProfileSummaryAnalysis>(M)) {
-  assert(ModelRunner);
-  ModelRunner->switchContext("");
   // Extract the 'call site height' feature - the position of a call site
   // relative to the farthest statically reachable SCC node. We don't mutate
   // this value while inlining happens. Empirically, this feature proved
@@ -192,18 +193,27 @@ MLInlineAdvisor::MLInlineAdvisor(
   }
   NodeCount = AllNodes.size();
 
-  if (auto IR2VecVocabResult = MAM.getCachedResult<IR2VecVocabAnalysis>(M)) {
+  if (auto *IR2VecVocabResult = MAM.getCachedResult<IR2VecVocabAnalysis>(M)) {
     if (!IR2VecVocabResult->isValid()) {
       M.getContext().emitError("IR2VecVocabAnalysis is not valid");
       return;
     }
     // Add the IR2Vec features to the feature map
     auto IR2VecDim = IR2VecVocabResult->getDimension();
-    getFeatureMap().push_back(
+    FeatureMap.push_back(
        TensorSpec::createSpec<float>("callee_embedding", {IR2VecDim}));
-    getFeatureMap().push_back(
+    FeatureMap.push_back(
        TensorSpec::createSpec<float>("caller_embedding", {IR2VecDim}));
   }
+  if (InteractiveIncludeDefault)
+    FeatureMap.push_back(DefaultDecisionSpec);
+
+  ModelRunner = GetModelRunner(getFeatureMap());
+  if (!ModelRunner) {
+    M.getContext().emitError("Could not create model runner");
+    return;
+  }
+  ModelRunner->switchContext("");
 }
 
 unsigned MLInlineAdvisor::getInitialFunctionLevel(const Function &F) const {
@@ -475,7 +485,7 @@ std::unique_ptr<InlineAdvice> MLInlineAdvisor::getAdviceImpl(CallBase &CB) {
   }
   // This one would have been set up to be right at the end.
   if (!InteractiveChannelBaseName.empty() && InteractiveIncludeDefault)
-    *ModelRunner->getTensor<int64_t>(getFeatureMap().size()) =
+    *ModelRunner->getTensor<int64_t>(getFeatureMap().size() - 1) =
         GetDefaultAdvice(CB);
   return getAdviceFromModel(CB, ORE);
 }
@@ -554,8 +564,8 @@ void MLInlineAdvice::reportContextForRemark(
     DiagnosticInfoOptimizationBase &OR) {
   using namespace ore;
   OR << NV("Callee", Callee->getName());
-  for (size_t I = 0; I < getFeatureMap().size(); ++I)
-    OR << NV(getFeatureMap()[I].name(),
+  for (size_t I = 0; I < getAdvisor()->getFeatureMap().size(); ++I)
+    OR << NV(getAdvisor()->getFeatureMap()[I].name(),
             *getAdvisor()->getModelRunner().getTensor<int64_t>(I));
   OR << NV("ShouldInline", isInliningRecommended());
 }
@llvmbot
Member

llvmbot commented Aug 29, 2025

@llvm/pr-subscribers-llvm-analysis

    FeatureMap.push_back(
        TensorSpec::createSpec<float>("caller_embedding", {IR2VecDim}));
  }
  if (InteractiveIncludeDefault)
Contributor

Should it also check for !InteractiveChannelBaseName.empty()?

Member Author

ya, but if not, and the model doesn't have it, it's a no-op (minus a compile-time penalty).

   // This one would have been set up to be right at the end.
   if (!InteractiveChannelBaseName.empty() && InteractiveIncludeDefault)
-    *ModelRunner->getTensor<int64_t>(getFeatureMap().size()) =
+    *ModelRunner->getTensor<int64_t>(getFeatureMap().size() - 1) =
Contributor

Trying to understand... why did this change?

Member Author

Before, we were adding the extra feature to a local clone we'd send the model (see lines 84-85 on the left)

@mtrofin mtrofin merged commit c878baf into llvm:main Aug 30, 2025
9 checks passed
@llvm-ci
Collaborator

llvm-ci commented Aug 30, 2025

LLVM Buildbot has detected a new failure on builder ml-opt-dev-x86-64 running on ml-opt-dev-x86-64-b1 while building llvm at step 5 "build-unified-tree".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/137/builds/24546

Here is the relevant piece of the build log for the reference
Step 5 (build-unified-tree) failure: build (failure)
...
FAILED: lib/Analysis/CMakeFiles/LLVMAnalysis.dir/DevelopmentModeInlineAdvisor.cpp.o
/b/ml-opt-dev-x86-64-b1/llvm-project/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:284:30: error: use of undeclared identifier 'getFeatureMap'
  284 | std::vector<TensorSpec> FT(getFeatureMap().begin(), getFeatureMap().end());
      |                            ^
/b/ml-opt-dev-x86-64-b1/llvm-project/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:284:55: error: use of undeclared identifier 'getFeatureMap'
  284 | std::vector<TensorSpec> FT(getFeatureMap().begin(), getFeatureMap().end());
      |                                                     ^
/b/ml-opt-dev-x86-64-b1/llvm-project/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:310:27: error: use of undeclared identifier 'getFeatureMap'
  310 | size_t FeatureMapSize = getFeatureMap().size();
      |                         ^
/b/ml-opt-dev-x86-64-b1/llvm-project/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:110:34: warning: private field 'FeatureMap' is not used [-Wunused-private-field]
  110 | const std::vector<TensorSpec> &FeatureMap;
      |                                ^
1 warning and 3 errors generated.
...
@llvm-ci
Collaborator

llvm-ci commented Aug 30, 2025

LLVM Buildbot has detected a new failure on builder ml-opt-devrel-x86-64 running on ml-opt-devrel-x86-64-b1 while building llvm at step 5 "build-unified-tree".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/175/builds/24385

Here is the relevant piece of the build log for the reference
Step 5 (build-unified-tree) failure: build (failure)
...
FAILED: lib/Analysis/CMakeFiles/LLVMAnalysis.dir/DevelopmentModeInlineAdvisor.cpp.o
/b/ml-opt-devrel-x86-64-b1/llvm-project/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:284:30: error: use of undeclared identifier 'getFeatureMap'
  284 | std::vector<TensorSpec> FT(getFeatureMap().begin(), getFeatureMap().end());
      |                            ^
/b/ml-opt-devrel-x86-64-b1/llvm-project/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:284:55: error: use of undeclared identifier 'getFeatureMap'
  284 | std::vector<TensorSpec> FT(getFeatureMap().begin(), getFeatureMap().end());
      |                                                     ^
/b/ml-opt-devrel-x86-64-b1/llvm-project/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:310:27: error: use of undeclared identifier 'getFeatureMap'
  310 | size_t FeatureMapSize = getFeatureMap().size();
      |                         ^
/b/ml-opt-devrel-x86-64-b1/llvm-project/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp:110:34: warning: private field 'FeatureMap' is not used [-Wunused-private-field]
  110 | const std::vector<TensorSpec> &FeatureMap;
      |                                ^
1 warning and 3 errors generated.
...
mtrofin added a commit to mtrofin/llvm-project that referenced this pull request Aug 30, 2025
mtrofin added a commit that referenced this pull request Aug 30, 2025

Labels

llvm:analysis Includes value tracking, cost tables and constant folding mlgo

5 participants