Anyone who's done even a small amount of TDD knows how painful retrofitting tests to brownfield, untestable code can be ... The primary causes are implicit dependencies (you can't pull all the levers needed to assert results from the code; you can't mock scenarios) and violations of the single responsibility principle (tests become complicated and contrived, require too much setup, and are hard to understand).
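To make the implicit-dependency problem concrete, here's a minimal sketch (the names InvoiceService, RateSource and usdToGbp are hypothetical, invented for illustration, and it assumes JUnit 5 on the classpath): a class that constructs its own collaborator can't be controlled from a test, while the same logic with the dependency injected behind an interface can be exercised against a stub.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hard to test: the collaborator is implicit, constructed inside the class,
// so a test can't control the rate it returns (it would hit the network).
// class InvoiceService {
//     private final LiveExchangeRateClient rates = new LiveExchangeRateClient();
//     double totalInGbp(double usd) { return usd * rates.usdToGbp(); }
// }

// Testable: the same logic, with the dependency injected behind an interface.
interface RateSource {
    double usdToGbp();
}

class InvoiceService {
    private final RateSource rates;

    InvoiceService(RateSource rates) {
        this.rates = rates;
    }

    double totalInGbp(double usd) {
        return usd * rates.usdToGbp();
    }
}

// With the seam in place we can "pull the lever": a lambda stands in
// for the real rate source and the assertion becomes deterministic.
class InvoiceServiceTest {
    @Test
    void convertsUsingTheInjectedRate() {
        InvoiceService service = new InvoiceService(() -> 0.80);
        assertEquals(80.0, service.totalInGbp(100.0), 1e-9);
    }
}
```

The interface is the seam: production code wires in the live client, while tests wire in whatever scenario they need to assert against.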
We temporarily grew our QA team (from one, maybe two people to half a dozen or more) to test the platform before any release. It was hugely expensive, both in time and money; some releases would take three months to complete 'testing'. Even then we knew we were shipping with issues, they just weren't 'blockers' or 'critical', just 'high-priority'.
I'm happy to report that I'm practicing TDD at my current company (a telecommunications, web and mobile app development house), coupled with Jenkins CI to give other static analysis reports (code coverage being the most useful, after asserting that the test suite passes).
Programming is our domain, and in my mind this makes it our responsibility, as professionals, to advise on best practices like TDD. It's not for project managers to decide whether it's done in order to reduce development time; that's out of their jurisdiction. In the same way they don't tell you what framework, caching solution or search algorithm to use, they shouldn't tell you whether you should be employing automated testing.
In my opinion, the software development industry (on the whole) is broken at present, as evidenced by the fact that having tests for your software is NOT the norm.
Perhaps it's unfair to say no testing occurs, because it does ... but in companies without automated testing, it's a very manual, human (read: clunky and often error-prone) process.