I didn't realize how big of a problem it was until I adopted an ECS, which encouraged bigger, loopy system functions (with systems being the only things that have functions) and dependencies flowing toward raw data, not abstractions.
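To make that concrete, here's a minimal sketch of what I mean by a bigger, loopy system function operating on raw data. All names and data here are hypothetical, not taken from any particular ECS library: components are plain data, and the system is just a function that loops over them.

```python
# Hypothetical, minimal ECS-style sketch: components are plain data tables,
# and the only function is a "system" that loops over the raw data directly.

positions = {1: [0.0, 0.0], 2: [5.0, 5.0]}    # entity id -> [x, y]
velocities = {1: [1.0, 2.0], 2: [-1.0, 0.0]}  # entity id -> [vx, vy]

def movement_system(positions, velocities, dt):
    """One bigger, loopy system function: all movement logic in one place,
    reading and writing component data with no abstraction in between."""
    for entity, vel in velocities.items():
        pos = positions.get(entity)
        if pos is None:
            continue  # entity has a velocity but no position; skip it
        pos[0] += vel[0] * dt
        pos[1] += vel[1] * dt

movement_system(positions, velocities, dt=1.0)
```

The point of the shape is that the dependency arrow points at the data tables, not at interfaces: to know what movement does, you read one loop top to bottom.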
That, to my surprise, yielded a codebase far easier to reason about and maintain than the codebases I'd worked in before. In those, debugging meant tracing through all kinds of teeny little functions, often through abstract function calls through pure interfaces leading who knows where until you stepped into them, only to spawn some cascade of events leading to places you never thought the code should ever go.
Unlike John Carmack, my biggest problem with those codebases wasn't performance, since I never had the ultra-tight latency demands of AAA game engines, and most of our performance issues related more to throughput. Of course, working in the narrower and narrower confines of teenier and teenier functions and classes can also make hotspots harder and harder to optimize, since the structure itself gets in the way (you have to fuse all these teeny pieces back into something bigger before you can even begin to tackle a hotspot effectively).
No, the biggest issue for me was being unable to confidently reason about the system's overall correctness even with all tests passing. There was too much to hold in my head, because that type of system didn't let you reason about it without accounting for all the tiny details and endless interactions between tiny functions and objects going on everywhere. There were too many "what ifs?", too many things that needed to be called at the right time, too many questions about what would happen if they were called at the wrong time (questions that get raised to the point of paranoia when one event triggers another event triggers another, leading you to all kinds of unpredictable places), and so on.
Now I like my big-ass 80-line functions here and there, as long as they still perform a singular, clear responsibility and don't have like 8 levels of nested blocks. They give the feeling that there are fewer things in the system to test and comprehend. Even if the smaller, diced-up versions of these bigger functions were only private implementation details that nobody else could call, it still somehow tends to feel like there are fewer interactions going on throughout the system. There's something to having a shallower call stack and bigger, meatier functions and objects... a "flatter" system, not a "deeper" one.
Simplicity at the small scale doesn't always reduce complexity at the big-picture level if the choice is between one meaty function and 12 uber-simple ones calling each other through a complex graph of dependencies. At the end of the day you often have to reason about what goes on beyond a single function, about what all these functions add up to ultimately do, and it can be harder to see that big picture when you have to deduce it from the smallest puzzle pieces.
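As an illustration of the "one meaty function" side of that trade-off, here's a hypothetical order-processing example (not code from any real system I'm describing): the whole flow reads top to bottom with a shallow call stack, instead of being diced into a validator, a pricer, a discounter, and a taxer that call each other.

```python
# A "flat" version: one meatier function where the whole flow is visible
# top to bottom. Everything here (names, the 10% discount rule, the
# tax-after-discount order) is a made-up example, not a real spec.

def process_order(items, tax_rate):
    # validate: reject empty orders and nonsensical line items
    if not items:
        raise ValueError("empty order")
    for name, qty, unit_price in items:
        if qty <= 0 or unit_price < 0:
            raise ValueError(f"bad line item: {name}")

    # subtotal: sum of qty * unit price across line items
    subtotal = sum(qty * unit_price for _, qty, unit_price in items)

    # discount: flat 10% off orders over 100 (hypothetical business rule)
    discount = subtotal * 0.10 if subtotal > 100 else 0.0

    # tax is applied after the discount, per this example's rule
    total = (subtotal - discount) * (1 + tax_rate)
    return round(total, 2)
```

The invariant "tax comes after discount" lives in two adjacent lines you can see at a glance; split across four teeny functions, it becomes a property of the call graph that you have to reconstruct in your head.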
Of course, very general-purpose, well-tested library-type code can be exempt from this rule, since such code often functions and stands well on its own. It also tends to be teeny compared to the code closer to your application's domain (thousands of lines of code, not millions), and so widely applicable that it becomes part of the daily vocabulary. But with something more specific to your application, where the system-wide invariants you have to maintain go far beyond a single function or class, I tend to find meatier functions help, for whatever reason. I find it much easier to work with bigger puzzle pieces when trying to figure out what's going on with the big picture.