
Conversation

@topperc topperc (Collaborator) commented Dec 19, 2024

This function is most often used in range-based loops or algorithms where the iterator is implicitly dereferenced. The dereference returns the SDNode * of the user rather than an SDUse *, so users() is a better name.

I've long been annoyed that we can't write a range-based loop over SDUse when we need getOperandNo. I plan to rename use_iterator to user_iterator and add a use_iterator that returns SDUse * on dereference. This will make it more like IR.
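To make the naming issue concrete, here is a minimal sketch: the first loop uses the users() range added by this patch, while the second loop and its SDUse-yielding uses() range are hypothetical, illustrating the planned follow-up that does not exist yet.

```cpp
#include "llvm/CodeGen/SelectionDAGNodes.h"
using namespace llvm;

// After this patch: the range-based loop dereferences to the user SDNode *,
// so users() describes what the loop variable actually is.
static bool allUsersAreCopyToReg(SDNode *N) {
  for (SDNode *User : N->users())
    if (User->getOpcode() != ISD::CopyToReg)
      return false;
  return true;
}

// Hypothetical follow-up (not part of this patch): a uses() range whose
// iterator dereferences to SDUse *, so getOperandNo() is reachable directly,
// mirroring Value::uses()/users() in LLVM IR.
static void visitUses(SDNode *N) {
  for (SDUse *U : N->uses()) {         // assumed future API
    unsigned OpNo = U->getOperandNo(); // which operand of the user refers to N
    SDNode *User = U->getUser();       // the using node is still reachable
    (void)OpNo;
    (void)User;
  }
}
```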

@llvmbot llvmbot (Member) commented Dec 19, 2024

@llvm/pr-subscribers-backend-m68k
@llvm/pr-subscribers-backend-aarch64
@llvm/pr-subscribers-backend-systemz
@llvm/pr-subscribers-backend-nvptx
@llvm/pr-subscribers-llvm-selectiondag

@llvm/pr-subscribers-backend-arm

Author: Craig Topper (topperc)

Changes

This function is most often used in range-based loops or algorithms where the iterator is implicitly dereferenced. The dereference returns the SDNode * of the user rather than an SDUse *, so users() is a better name.

I've long been annoyed that we can't write a range-based loop over SDUse when we need getOperandNo. I plan to rename use_iterator to user_iterator and add a use_iterator that returns SDUse * on dereference. This will make it more like IR.


Patch is 59.13 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/120499.diff

28 Files Affected:

  • (modified) llvm/include/llvm/CodeGen/SelectionDAGNodes.h (+7-3)
  • (modified) llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp (+18-18)
  • (modified) llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp (+3-3)
  • (modified) llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp (+2-2)
  • (modified) llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp (+2-2)
  • (modified) llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp (+1-1)
  • (modified) llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp (+1-1)
  • (modified) llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp (+7-7)
  • (modified) llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp (+1-1)
  • (modified) llvm/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp (+6-6)
  • (modified) llvm/lib/Target/AArch64/AArch64ISelLowering.cpp (+8-8)
  • (modified) llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp (+5-5)
  • (modified) llvm/lib/Target/AMDGPU/SIISelLowering.cpp (+3-3)
  • (modified) llvm/lib/Target/ARM/ARMISelLowering.cpp (+7-7)
  • (modified) llvm/lib/Target/Hexagon/HexagonISelDAGToDAGHVX.cpp (+3-3)
  • (modified) llvm/lib/Target/LoongArch/LoongArchISelLowering.cpp (+1-1)
  • (modified) llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.cpp (+1-1)
  • (modified) llvm/lib/Target/NVPTX/NVPTXISelLowering.cpp (+4-4)
  • (modified) llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp (+5-5)
  • (modified) llvm/lib/Target/PowerPC/PPCISelLowering.cpp (+29-29)
  • (modified) llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp (+1-1)
  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.cpp (+3-3)
  • (modified) llvm/lib/Target/SystemZ/SystemZISelDAGToDAG.cpp (+1-1)
  • (modified) llvm/lib/Target/SystemZ/SystemZISelLowering.cpp (+6-6)
  • (modified) llvm/lib/Target/VE/VEISelLowering.cpp (+2-2)
  • (modified) llvm/lib/Target/X86/X86ISelDAGToDAG.cpp (+2-2)
  • (modified) llvm/lib/Target/X86/X86ISelLowering.cpp (+27-26)
  • (modified) llvm/lib/Target/X86/X86ISelLoweringCall.cpp (+1-1)
diff --git a/llvm/include/llvm/CodeGen/SelectionDAGNodes.h b/llvm/include/llvm/CodeGen/SelectionDAGNodes.h index 61f3c6329efce8..b525872f9dd2a2 100644 --- a/llvm/include/llvm/CodeGen/SelectionDAGNodes.h +++ b/llvm/include/llvm/CodeGen/SelectionDAGNodes.h @@ -750,7 +750,7 @@ END_TWO_BYTE_PACK() bool use_empty() const { return UseList == nullptr; } /// Return true if there is exactly one use of this node. - bool hasOneUse() const { return hasSingleElement(uses()); } + bool hasOneUse() const { return hasSingleElement(users()); } /// Return the number of uses of this node. This method takes /// time proportional to the number of uses. @@ -844,10 +844,14 @@ END_TWO_BYTE_PACK() static use_iterator use_end() { return use_iterator(nullptr); } - inline iterator_range<use_iterator> uses() { + // Dereferencing use_iterator returns the user SDNode* making it closer to a + // user_iterator thus this function is called users() to reflect that. + // FIXME: Rename to user_iterator and introduce a use_iterator that returns + // SDUse*. + inline iterator_range<use_iterator> users() { return make_range(use_begin(), use_end()); } - inline iterator_range<use_iterator> uses() const { + inline iterator_range<use_iterator> users() const { return make_range(use_begin(), use_end()); } diff --git a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp index 10fc8eecaff907..9b0dc853ac037d 100644 --- a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp @@ -202,7 +202,7 @@ namespace { /// When an instruction is simplified, add all users of the instruction to /// the work lists because they might get more simplified now. void AddUsersToWorklist(SDNode *N) { - for (SDNode *Node : N->uses()) + for (SDNode *Node : N->users()) AddToWorklist(Node); } @@ -1113,7 +1113,7 @@ bool DAGCombiner::reassociationCanBreakAddressingModePattern(unsigned Opc, : N1.getConstantOperandVal(1))); if (Opc == ISD::SUB) ScalableOffset = -ScalableOffset; - if (all_of(N->uses(), [&](SDNode *Node) { + if (all_of(N->users(), [&](SDNode *Node) { if (auto *LoadStore = dyn_cast<MemSDNode>(Node); LoadStore && LoadStore->getBasePtr().getNode() == N) { TargetLoweringBase::AddrMode AM; @@ -1151,7 +1151,7 @@ bool DAGCombiner::reassociationCanBreakAddressingModePattern(unsigned Opc, return false; const int64_t CombinedValue = CombinedValueIntVal.getSExtValue(); - for (SDNode *Node : N->uses()) { + for (SDNode *Node : N->users()) { if (auto *LoadStore = dyn_cast<MemSDNode>(Node)) { // Is x[offset2] already not a legal addressing mode? If so then // reassociating the constants breaks nothing (we test offset2 because @@ -1176,7 +1176,7 @@ bool DAGCombiner::reassociationCanBreakAddressingModePattern(unsigned Opc, if (GA->getOpcode() == ISD::GlobalAddress && TLI.isOffsetFoldingLegal(GA)) return false; - for (SDNode *Node : N->uses()) { + for (SDNode *Node : N->users()) { auto *LoadStore = dyn_cast<MemSDNode>(Node); if (!LoadStore) return false; @@ -4720,7 +4720,7 @@ SDValue DAGCombiner::useDivRem(SDNode *Node) { SDValue Op0 = Node->getOperand(0); SDValue Op1 = Node->getOperand(1); SDValue combined; - for (SDNode *User : Op0->uses()) { + for (SDNode *User : Op0->users()) { if (User == Node || User->getOpcode() == ISD::DELETED_NODE || User->use_empty()) continue; @@ -10369,7 +10369,7 @@ static SDValue combineShiftToMULH(SDNode *N, const SDLoc &DL, SelectionDAG &DAG, unsigned MulLoHiOp = IsSignExt ? 
ISD::SMUL_LOHI : ISD::UMUL_LOHI; if (!ShiftOperand.hasOneUse() && TLI.isOperationLegalOrCustom(MulLoHiOp, NarrowVT) && - llvm::any_of(ShiftOperand->uses(), UserOfLowerBits)) { + llvm::any_of(ShiftOperand->users(), UserOfLowerBits)) { return SDValue(); } @@ -13570,7 +13570,7 @@ static SDValue tryToFoldExtOfLoad(SelectionDAG &DAG, DAGCombiner &Combiner, if (NonNegZExt) { assert(ExtLoadType == ISD::ZEXTLOAD && ExtOpc == ISD::ZERO_EXTEND && "Unexpected load type or opcode"); - for (SDNode *User : N0->uses()) { + for (SDNode *User : N0->users()) { if (User->getOpcode() == ISD::SETCC) { ISD::CondCode CC = cast<CondCodeSDNode>(User->getOperand(2))->get(); if (ISD::isSignedIntSetCC(CC)) { @@ -17673,7 +17673,7 @@ SDValue DAGCombiner::combineRepeatedFPDivisors(SDNode *N) { // Find all FDIV users of the same divisor. // Use a set because duplicates may be present in the user list. SetVector<SDNode *> Users; - for (auto *U : N1->uses()) { + for (auto *U : N1->users()) { if (U->getOpcode() == ISD::FDIV && U->getOperand(1) == N1) { // Skip X/sqrt(X) that has not been simplified to sqrt(X) yet. if (U->getOperand(1).getOpcode() == ISD::FSQRT && @@ -18965,7 +18965,7 @@ bool DAGCombiner::CombineToPreIndexedLoadStore(SDNode *N) { // Now check for #3 and #4. bool RealUse = false; - for (SDNode *Use : Ptr->uses()) { + for (SDNode *Use : Ptr->users()) { if (Use == N) continue; if (SDNode::hasPredecessorHelper(Use, Visited, Worklist, MaxSteps)) @@ -19089,7 +19089,7 @@ static bool shouldCombineToPostInc(SDNode *N, SDValue Ptr, SDNode *PtrUse, SmallPtrSet<const SDNode *, 32> Visited; unsigned MaxSteps = SelectionDAG::getHasPredecessorMaxSteps(); - for (SDNode *Use : BasePtr->uses()) { + for (SDNode *Use : BasePtr->users()) { if (Use == Ptr.getNode()) continue; @@ -19110,7 +19110,7 @@ static bool shouldCombineToPostInc(SDNode *N, SDValue Ptr, SDNode *PtrUse, // If all the uses are load / store addresses, then don't do the // transformation. if (Use->getOpcode() == ISD::ADD || Use->getOpcode() == ISD::SUB) { - for (SDNode *UseUse : Use->uses()) + for (SDNode *UseUse : Use->users()) if (canFoldInAddressingMode(Use, UseUse, DAG, TLI)) return false; } @@ -19136,7 +19136,7 @@ static SDNode *getPostIndexedLoadStoreOp(SDNode *N, bool &IsLoad, // nor a successor of N. Otherwise, if Op is folded that would // create a cycle. unsigned MaxSteps = SelectionDAG::getHasPredecessorMaxSteps(); - for (SDNode *Op : Ptr->uses()) { + for (SDNode *Op : Ptr->users()) { // Check for #1. if (!shouldCombineToPostInc(N, Ptr, Op, BasePtr, Offset, AM, DAG, TLI)) continue; @@ -20515,7 +20515,7 @@ bool DAGCombiner::isMulAddWithConstProfitable(SDNode *MulNode, SDValue AddNode, return true; // Walk all the users of the constant with which we're multiplying. - for (SDNode *Use : ConstNode->uses()) { + for (SDNode *Use : ConstNode->users()) { if (Use == MulNode) // This use is the one we're on right now. Skip it. continue; @@ -22902,7 +22902,7 @@ bool DAGCombiner::refineExtractVectorEltIntoMultipleNarrowExtractVectorElts( // Did we fail to model any of the users of the Producer? bool ProducerIsLeaf = false; // Look at each user of this Producer. - for (SDNode *User : E.Producer->uses()) { + for (SDNode *User : E.Producer->users()) { switch (User->getOpcode()) { // TODO: support ISD::BITCAST // TODO: support ISD::ANY_EXTEND @@ -23176,13 +23176,13 @@ SDValue DAGCombiner::visitEXTRACT_VECTOR_ELT(SDNode *N) { // If only EXTRACT_VECTOR_ELT nodes use the source vector we can // simplify it based on the (valid) extraction indices. 
- if (llvm::all_of(VecOp->uses(), [&](SDNode *Use) { + if (llvm::all_of(VecOp->users(), [&](SDNode *Use) { return Use->getOpcode() == ISD::EXTRACT_VECTOR_ELT && Use->getOperand(0) == VecOp && isa<ConstantSDNode>(Use->getOperand(1)); })) { APInt DemandedElts = APInt::getZero(NumElts); - for (SDNode *Use : VecOp->uses()) { + for (SDNode *Use : VecOp->users()) { auto *CstElt = cast<ConstantSDNode>(Use->getOperand(1)); if (CstElt->getAPIntValue().ult(NumElts)) DemandedElts.setBit(CstElt->getZExtValue()); @@ -27302,7 +27302,7 @@ SDValue DAGCombiner::visitGET_FPENV_MEM(SDNode *N) { // Check if the memory, where FP state is written to, is used only in a single // load operation. LoadSDNode *LdNode = nullptr; - for (auto *U : Ptr->uses()) { + for (auto *U : Ptr->users()) { if (U == N) continue; if (auto *Ld = dyn_cast<LoadSDNode>(U)) { @@ -27352,7 +27352,7 @@ SDValue DAGCombiner::visitSET_FPENV_MEM(SDNode *N) { // Check if the address of FP state is used also in a store operation only. StoreSDNode *StNode = nullptr; - for (auto *U : Ptr->uses()) { + for (auto *U : Ptr->users()) { if (U == N) continue; if (auto *St = dyn_cast<StoreSDNode>(U)) { diff --git a/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp b/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp index 9c7085cc7e7a83..8e313fb21eedea 100644 --- a/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp @@ -105,7 +105,7 @@ void InstrEmitter::EmitCopyFromReg(SDNode *Node, unsigned ResNo, bool IsClone, if (TLI->isTypeLegal(VT)) UseRC = TLI->getRegClassFor(VT, Node->isDivergent()); - for (SDNode *User : Node->uses()) { + for (SDNode *User : Node->users()) { bool Match = true; if (User->getOpcode() == ISD::CopyToReg && User->getOperand(2).getNode() == Node && @@ -225,7 +225,7 @@ void InstrEmitter::CreateVirtualRegisters(SDNode *Node, } if (!VRBase && !IsClone && !IsCloned) - for (SDNode *User : Node->uses()) { + for (SDNode *User : Node->users()) { if (User->getOpcode() == ISD::CopyToReg && User->getOperand(2).getNode() == Node && User->getOperand(2).getResNo() == i) { @@ -502,7 +502,7 @@ void InstrEmitter::EmitSubregNode(SDNode *Node, VRBaseMapType &VRBaseMap, // If the node is only used by a CopyToReg and the dest reg is a vreg, use // the CopyToReg'd destination register instead of creating a new vreg. - for (SDNode *User : Node->uses()) { + for (SDNode *User : Node->users()) { if (User->getOpcode() == ISD::CopyToReg && User->getOperand(2).getNode() == Node) { Register DestReg = cast<RegisterSDNode>(User->getOperand(1))->getReg(); diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp index ca87168929f964..595a410101eca1 100644 --- a/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp @@ -1394,7 +1394,7 @@ SDValue SelectionDAGLegalize::ExpandExtractFromVectorThroughStack(SDValue Op) { Visited.insert(Op.getNode()); Worklist.push_back(Idx.getNode()); SDValue StackPtr, Ch; - for (SDNode *User : Vec.getNode()->uses()) { + for (SDNode *User : Vec.getNode()->users()) { if (StoreSDNode *ST = dyn_cast<StoreSDNode>(User)) { if (ST->isIndexed() || ST->isTruncatingStore() || ST->getValue() != Vec) @@ -2293,7 +2293,7 @@ static bool useSinCos(SDNode *Node) { ? ISD::FCOS : ISD::FSIN; SDValue Op0 = Node->getOperand(0); - for (const SDNode *User : Op0.getNode()->uses()) { + for (const SDNode *User : Op0.getNode()->users()) { if (User == Node) continue; // The other user might have been turned into sincos already. 
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp index cb6d3fe4db8a43..c7d29ec1a836c1 100644 --- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp @@ -189,7 +189,7 @@ void DAGTypeLegalizer::PerformExpensiveChecks() { #ifndef NDEBUG // Checked that NewNodes are only used by other NewNodes. for (SDNode *N : NewNodes) { - for (SDNode *U : N->uses()) + for (SDNode *U : N->users()) assert(U->getNodeId() == NewNode && "NewNode used by non-NewNode!"); } #endif @@ -399,7 +399,7 @@ bool DAGTypeLegalizer::run() { assert(N->getNodeId() == ReadyToProcess && "Node ID recalculated?"); N->setNodeId(Processed); - for (SDNode *User : N->uses()) { + for (SDNode *User : N->users()) { int NodeId = User->getNodeId(); // This node has two options: it can either be a new node or its Node ID diff --git a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp index 70a7438440191a..26eba4b257fb9c 100644 --- a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp @@ -756,7 +756,7 @@ void ScheduleDAGLinearize::Schedule() { // Glue user must be scheduled together with the glue operand. So other // users of the glue operand must be treated as its users. SDNode *ImmGUser = Glue->getGluedUser(); - for (const SDNode *U : Glue->uses()) + for (const SDNode *U : Glue->users()) if (U == ImmGUser) --Degree; GUser->setNodeId(UDegree + Degree); diff --git a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp index 31939ae5922ec0..2e59dbf2f70280 100644 --- a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp @@ -388,7 +388,7 @@ void ScheduleDAGSDNodes::BuildSchedUnits() { // There are either zero or one users of the Glue result. bool HasGlueUse = false; - for (SDNode *U : N->uses()) + for (SDNode *U : N->users()) if (GlueVal.isOperandOf(U)) { HasGlueUse = true; assert(N->getNodeId() == -1 && "Node already inserted!"); diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp index 0fb5c4d5c4cb9b..bd9e5d4dce8ec6 100644 --- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp @@ -2556,7 +2556,7 @@ bool SelectionDAG::expandMultipleResultFPLibCall( // destination pointers can be used instead of creating stack allocations. SDValue StoresInChain; SmallVector<StoreSDNode *, 2> ResultStores(NumResults); - for (SDNode *User : Node->uses()) { + for (SDNode *User : Node->users()) { if (!ISD::isNormalStore(User)) continue; auto *ST = cast<StoreSDNode>(User); @@ -7933,7 +7933,7 @@ SDValue SelectionDAG::getStackArgumentTokenFactor(SDValue Chain) { ArgChains.push_back(Chain); // Add a chain value for each stack argument. 
- for (SDNode *U : getEntryNode().getNode()->uses()) + for (SDNode *U : getEntryNode().getNode()->users()) if (LoadSDNode *L = dyn_cast<LoadSDNode>(U)) if (FrameIndexSDNode *FI = dyn_cast<FrameIndexSDNode>(L->getBasePtr())) if (FI->getIndex() < 0) @@ -11926,7 +11926,7 @@ void SelectionDAG::updateDivergence(SDNode *N) { bool IsDivergent = calculateDivergence(N); if (N->SDNodeBits.IsDivergent != IsDivergent) { N->SDNodeBits.IsDivergent = IsDivergent; - llvm::append_range(Worklist, N->uses()); + llvm::append_range(Worklist, N->users()); } } while (!Worklist.empty()); } @@ -11942,7 +11942,7 @@ void SelectionDAG::CreateTopologicalOrder(std::vector<SDNode *> &Order) { } for (size_t I = 0; I != Order.size(); ++I) { SDNode *N = Order[I]; - for (auto *U : N->uses()) { + for (auto *U : N->users()) { unsigned &UnsortedOps = Degree[U]; if (0 == --UnsortedOps) Order.push_back(U); @@ -12071,7 +12071,7 @@ unsigned SelectionDAG::AssignTopologicalOrder() { checkForCycles(N, this); // N is in sorted position, so all its uses have one less operand // that needs to be sorted. - for (SDNode *P : N->uses()) { + for (SDNode *P : N->users()) { unsigned Degree = P->getNodeId(); assert(Degree != 0 && "Invalid node degree"); --Degree; @@ -12489,7 +12489,7 @@ bool SDNode::hasAnyUseOfValue(unsigned Value) const { /// isOnlyUserOf - Return true if this node is the only use of N. bool SDNode::isOnlyUserOf(const SDNode *N) const { bool Seen = false; - for (const SDNode *User : N->uses()) { + for (const SDNode *User : N->users()) { if (User == this) Seen = true; else @@ -12502,7 +12502,7 @@ bool SDNode::isOnlyUserOf(const SDNode *N) const { /// Return true if the only users of N are contained in Nodes. bool SDNode::areOnlyUsersOf(ArrayRef<const SDNode *> Nodes, const SDNode *N) { bool Seen = false; - for (const SDNode *User : N->uses()) { + for (const SDNode *User : N->users()) { if (llvm::is_contained(Nodes, User)) Seen = true; else diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp index 35aa7b87bc3b7f..9147fb1c2badfc 100644 --- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp @@ -1225,7 +1225,7 @@ void SelectionDAGISel::EnforceNodeIdInvariant(SDNode *Node) { while (!Nodes.empty()) { SDNode *N = Nodes.pop_back_val(); - for (auto *U : N->uses()) { + for (auto *U : N->users()) { auto UId = U->getNodeId(); if (UId > 0) { InvalidateNodeId(U); diff --git a/llvm/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp b/llvm/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp index 5df61b37220373..f831f8de705476 100644 --- a/llvm/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp +++ b/llvm/lib/Target/AArch64/AArch64ISelDAGToDAG.cpp @@ -679,9 +679,9 @@ static bool isWorthFoldingSHL(SDValue V) { // operation. If yes, do not try to fold this node into the address // computation, since the computation will be kept. const SDNode *Node = V.getNode(); - for (SDNode *UI : Node->uses()) + for (SDNode *UI : Node->users()) if (!isa<MemSDNode>(*UI)) - for (SDNode *UII : UI->uses()) + for (SDNode *UII : UI->users()) if (!isa<MemSDNode>(*UII)) return false; return true; @@ -1012,7 +1012,7 @@ bool AArch64DAGToDAGISel::SelectArithUXTXRegister(SDValue N, SDValue &Reg, /// a single pseudo-instruction for an ADRP/ADD pair so over-aggressive folding /// leads to duplicated ADRP instructions. 
static bool isWorthFoldingADDlow(SDValue N) { - for (auto *Use : N->uses()) { + for (auto *Use : N->users()) { if (Use->getOpcode() != ISD::LOAD && Use->getOpcode() != ISD::STORE && Use->getOpcode() != ISD::ATOMIC_LOAD && Use->getOpcode() != ISD::ATOMIC_STORE) @@ -1245,7 +1245,7 @@ bool AArch64DAGToDAGISel::SelectAddrModeWRO(SDValue N, unsigned Size, // operation. If yes, do not try to fold this node into the address // computation, since the computation will be kept. const SDNode *Node = N.getNode(); - for (SDNode *UI : Node->uses()) { + for (SDNode *UI : Node->users()) { if (!isa<MemSDNode>(*UI)) return false; } @@ -1329,7 +1329,7 @@ bool AArch64DAGToDAGISel::SelectAddrModeXRO(SDValue N, unsigned Size, // operation. If yes, do not try to fold this node into the address // computation, since the computation will be kept. const SDNode *Node = N.getNode(); - for (SDNode *UI : Node->uses()) { + for (SDNode *UI : Node->users()) { if (!isa<MemSDNode>(*UI)) return false; } @@ -3031,7 +3031,7 @@ static void getUsefulBits(SDValue Op, APInt &UsefulBits, unsigned Depth) { } APInt UsersUsefulBits(UsefulBits.getBitWidth(), 0); - for (SDNode *Node : Op.getNode()->uses()) { + for (SDNode *Node : Op.getNode()->users()) { // A use cannot produce useful bits APInt UsefulBitsForUse = APInt(UsefulBits); getUsefulBitsForUse(Node, UsefulBitsForUse, Op, Depth); diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp index cb6ba06bd4425c..5865dbe1307baf 100644 --- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp +++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp @@ -6464,7 +6464,7 @@ bool AArch64TargetLowering::isVectorLoadExtDesirable(SDValue ExtVal) const { return false; unsigned NumExtMaskedLoads = 0; - for (auto *U : Ld->getMask()->uses()) + for (auto *U : Ld->getMask()->users()) if (isa<MaskedLoadSDNode>(U)) NumExtMaskedLoads++; @@ -8559,7 +8559,7 @@ SDValue AArch64TargetLowering::addTokenForArgument(SDValue Chain, ArgChains.push_b... [truncated] 
@llvmbot llvmbot (Member) commented Dec 19, 2024

@llvm/pr-subscribers-backend-hexagon

@llvmbot llvmbot (Member) commented Dec 19, 2024

@llvm/pr-subscribers-backend-x86

static bool isWorthFoldingADDlow(SDValue N) { - for (auto *Use : N->uses()) { + for (auto *Use : N->users()) { if (Use->getOpcode() != ISD::LOAD && Use->getOpcode() != ISD::STORE && Use->getOpcode() != ISD::ATOMIC_LOAD && Use->getOpcode() != ISD::ATOMIC_STORE) @@ -1245,7 +1245,7 @@ bool AArch64DAGToDAGISel::SelectAddrModeWRO(SDValue N, unsigned Size, // operation. If yes, do not try to fold this node into the address // computation, since the computation will be kept. const SDNode *Node = N.getNode(); - for (SDNode *UI : Node->uses()) { + for (SDNode *UI : Node->users()) { if (!isa<MemSDNode>(*UI)) return false; } @@ -1329,7 +1329,7 @@ bool AArch64DAGToDAGISel::SelectAddrModeXRO(SDValue N, unsigned Size, // operation. If yes, do not try to fold this node into the address // computation, since the computation will be kept. const SDNode *Node = N.getNode(); - for (SDNode *UI : Node->uses()) { + for (SDNode *UI : Node->users()) { if (!isa<MemSDNode>(*UI)) return false; } @@ -3031,7 +3031,7 @@ static void getUsefulBits(SDValue Op, APInt &UsefulBits, unsigned Depth) { } APInt UsersUsefulBits(UsefulBits.getBitWidth(), 0); - for (SDNode *Node : Op.getNode()->uses()) { + for (SDNode *Node : Op.getNode()->users()) { // A use cannot produce useful bits APInt UsefulBitsForUse = APInt(UsefulBits); getUsefulBitsForUse(Node, UsefulBitsForUse, Op, Depth); diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp index cb6ba06bd4425c..5865dbe1307baf 100644 --- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp +++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp @@ -6464,7 +6464,7 @@ bool AArch64TargetLowering::isVectorLoadExtDesirable(SDValue ExtVal) const { return false; unsigned NumExtMaskedLoads = 0; - for (auto *U : Ld->getMask()->uses()) + for (auto *U : Ld->getMask()->users()) if (isa<MaskedLoadSDNode>(U)) NumExtMaskedLoads++; @@ -8559,7 +8559,7 @@ SDValue AArch64TargetLowering::addTokenForArgument(SDValue Chain, ArgChains.push_b... [truncated] 
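Every hunk in the patch is the same mechanical rename: a loop over a node's consumers switches from uses() to users(). A minimal sketch of the resulting usage, assuming the post-patch SelectionDAG API; the helper function below is hypothetical and not part of the patch:

// Hypothetical helper: returns true if every user of N is a CopyToReg node.
// After this patch the consumer iteration is spelled users(); before it, the
// same loop read "for (const SDNode *User : N->uses())" with identical behavior,
// since dereferencing the iterator yields the consuming SDNode* either way.
#include "llvm/CodeGen/ISDOpcodes.h"
#include "llvm/CodeGen/SelectionDAGNodes.h"

static bool allUsersAreCopyToReg(const llvm::SDNode *N) {
  for (const llvm::SDNode *User : N->users())
    if (User->getOpcode() != llvm::ISD::CopyToReg)
      return false;
  return true;
}

Only the spelling changes, which is why the diff is large but entirely mechanical.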
Contributor

@s-barannikov s-barannikov left a comment


LGTM with a tiny nit.

@topperc topperc merged commit 104ad92 into llvm:main Dec 19, 2024
9 checks passed
@topperc topperc deleted the pr/sdnode-users branch December 19, 2024 04:09
@llvm-ci
Collaborator

llvm-ci commented Dec 19, 2024

LLVM Buildbot has detected a new failure on builder openmp-offload-libc-amdgpu-runtime running on omp-vega20-1 while building llvm at step 7 "Add check check-offload".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/73/builds/10613

Here is the relevant piece of the build log for reference:
Step 7 (Add check check-offload) failure: 1200 seconds without output running [b'ninja', b'-j 32', b'check-offload'], attempting to kill
...
PASS: libomptarget :: x86_64-unknown-linux-gnu-LTO :: offloading/bug47654.cpp (980 of 993)
PASS: libomptarget :: x86_64-unknown-linux-gnu-LTO :: offloading/bug49779.cpp (981 of 993)
PASS: libomptarget :: x86_64-unknown-linux-gnu-LTO :: offloading/test_libc.cpp (982 of 993)
PASS: libomptarget :: x86_64-unknown-linux-gnu-LTO :: offloading/bug50022.cpp (983 of 993)
PASS: libomptarget :: x86_64-unknown-linux-gnu-LTO :: offloading/wtime.c (984 of 993)
PASS: libomptarget :: x86_64-unknown-linux-gnu :: offloading/bug49021.cpp (985 of 993)
PASS: libomptarget :: x86_64-unknown-linux-gnu :: offloading/std_complex_arithmetic.cpp (986 of 993)
PASS: libomptarget :: x86_64-unknown-linux-gnu-LTO :: offloading/complex_reduction.cpp (987 of 993)
PASS: libomptarget :: x86_64-unknown-linux-gnu-LTO :: offloading/bug49021.cpp (988 of 993)
PASS: libomptarget :: x86_64-unknown-linux-gnu-LTO :: offloading/std_complex_arithmetic.cpp (989 of 993)
command timed out: 1200 seconds without output running [b'ninja', b'-j 32', b'check-offload'], attempting to kill
process killed by signal 9
program finished with exit code -1
elapsedTime=1238.145281