You're not likely to get a consensus answer on this, because the word "necessary" depends on information you haven't given. Indeed, this answer makes the excellent point that error-rate control is defined over some set of tests or procedures. Since you designed the study this particular way, you are free to choose which set of tests belong together for the purpose of controlling the Type I error rate. Using Tukey's HSD within each ANOVA controls the familywise error rate for that specific set of pairwise comparisons (presumably at the nominal $\alpha = .05$). One could argue that since you intended to run an ANOVA on each dependent variable, those tests are not post hoc, so you would not need to further control the error rate across the set of ANOVAs.
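To see why the choice of "family" matters, here is a minimal sketch of the familywise error rate for $m$ independent tests each run at level $\alpha$ (the $m = 3$ below is illustrative, not taken from your study):

```python
# Familywise error rate (FWER): probability of at least one Type I
# error across m independent tests, each run at per-test level alpha.

def familywise_error_rate(alpha, m):
    """P(at least one false rejection) = 1 - P(no false rejections)."""
    return 1 - (1 - alpha) ** m

# Three separate ANOVAs, each at alpha = .05:
# the FWER across them is already about .14, not .05.
print(round(familywise_error_rate(0.05, 3), 4))  # 0.1426
```

This is the quantity Tukey's HSD controls within each ANOVA; whether you also want it controlled across the ANOVAs is the judgment call discussed above.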
I think the main thing to remember is that in frequentist inference, we acknowledge that the decision-making procedure inherent in hypothesis testing is error prone. We are free to choose, and to justify, our power, test statistic, error-controlling procedure, etc., so long as we do so a priori. Post-hoc procedures follow an ANOVA because you probably wouldn't have run those pairwise comparisons if the ANOVA $p$-value hadn't first fallen below some threshold. Either way, in your case you could either apply a further, easy adjustment such as the simple Bonferroni correction, which will be overly conservative, or argue, as most might, that you planned these tests in advance and presumably designed your study to have decent power, so no adjustment is necessary.
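If you do opt for the Bonferroni route across your ANOVAs, it is simple to apply by hand: test each $p$-value against $\alpha/m$, or equivalently multiply each $p$-value by $m$ (capped at 1). A small sketch, with made-up $p$-values:

```python
# Simple Bonferroni correction across m planned tests.
# The p-values below are illustrative, not from the question.

def bonferroni(p_values, alpha=0.05):
    """Return Bonferroni-adjusted p-values and reject/retain decisions."""
    m = len(p_values)
    adjusted = [min(p * m, 1.0) for p in p_values]  # cap at 1
    reject = [p_adj <= alpha for p_adj in adjusted]
    return adjusted, reject

adjusted, reject = bonferroni([0.010, 0.030, 0.200])
print([round(p, 3) for p in adjusted])  # [0.03, 0.09, 0.6]
print(reject)                           # [True, False, False]
```

Note how conservative this is: a raw $p = .03$, which would be "significant" on its own, no longer is once adjusted across three tests.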